AI

Google launches Cloud AI Platform Pipelines in beta to simplify machine learning development

Google Cloud Next 2019

Image Credit: Khari Johnson / VentureBeat

Google today announced the beta launch of Cloud AI Platform Pipelines, a service designed to deploy robust, repeatable AI pipelines along with monitoring, auditing, version tracking, and reproducibility in the cloud. Google’s pitching it as a way to deliver an “easy to install” secure execution environment for machine learning workflows, which could reduce the amount of time enterprises spend bringing products to production.

“When you’re just prototyping a machine learning model in a notebook, it can seem fairly straightforward. But when you need to start paying attention to the other pieces required to make a [machine learning] workflow sustainable and scalable, things become more complex,” wrote Google product manager Anusha Ramesh and staff developer advocate Amy Unruh in a blog post. “A machine learning workflow can involve many steps with dependencies on each other, from data preparation and analysis, to training, to evaluation, to deployment, and more. It’s hard to compose and track these processes in an ad-hoc manner — for example, in a set of notebooks or scripts — and things like auditing and reproducibility become increasingly problematic.”

AI Platform Pipelines has two major parts: (1) the infrastructure for deploying and running structured AI workflows integrated with Google Cloud Platform services and (2) the pipeline tools for building, debugging, and sharing pipelines and components. The service runs on a Google Kubernetes Engine cluster that’s automatically created as part of the installation process, and it’s accessible via the Cloud AI Platform dashboard. With AI Platform Pipelines, developers specify a pipeline using the Kubeflow Pipelines software development kit (SDK) or by customizing the TensorFlow Extended (TFX) pipeline template with the TFX SDK. In either case, the SDK compiles the pipeline and submits it to the Pipelines REST API server, which stores and schedules it for execution.
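
For a concrete sense of the workflow, here is a minimal sketch of defining and submitting a pipeline with the Kubeflow Pipelines SDK; the container image, bucket path, and endpoint URL are hypothetical placeholders rather than values from Google’s announcement.

```python
# Minimal sketch: define a one-step pipeline with the Kubeflow
# Pipelines SDK and submit it to the Pipelines REST API server.
# The image, bucket path, and host URL below are hypothetical.
import kfp
from kfp import dsl


def train_op(data_path: str) -> dsl.ContainerOp:
    # Each step runs as an isolated container (pod) in the cluster.
    return dsl.ContainerOp(
        name='train',
        image='gcr.io/my-project/trainer:latest',  # hypothetical image
        arguments=['--data', data_path],
    )


@dsl.pipeline(name='example-pipeline',
              description='Toy one-step workflow.')
def example_pipeline(data_path: str = 'gs://my-bucket/data'):
    train_op(data_path)


# The host URL is surfaced in the Cloud AI Platform dashboard after
# installation; the client compiles the pipeline and submits a run.
client = kfp.Client(host='https://<your-pipelines-endpoint>')
client.create_run_from_pipeline_func(example_pipeline, arguments={})
```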

Above: A schematic of Cloud AI Platform Pipelines.

Image Credit: Google

AI Platform Pipelines uses the open source Argo workflow engine to run the pipeline and has additional microservices to record metadata, handle component I/O, and schedule pipeline runs. Pipeline steps are executed as individual, isolated pods in the cluster, and each component can leverage Google Cloud services such as Dataflow, AI Platform Training and Prediction, BigQuery, and others. Meanwhile, pipelines can contain steps that perform GPU and tensor processing unit (TPU) computation in the cluster, directly leveraging features like autoscaling and node auto-provisioning.

AI Platform Pipeline runs include automatic metadata tracking using ML Metadata, a library for recording and retrieving metadata associated with machine learning developer and data scientist workflows. Automatic metadata tracking logs the artifacts used in each pipeline step, pipeline parameters, and the linkage across the input/output artifacts, as well as the pipeline steps that created and consumed them.
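
To make that concrete, here is a minimal sketch of the kind of record ML Metadata (MLMD) keeps, using its Python API against a local SQLite store; the type name, property, and URI are illustrative, and Pipelines performs this bookkeeping automatically.

```python
# Minimal ML Metadata (MLMD) sketch: register an artifact type and
# record one artifact. AI Platform Pipelines does this automatically;
# the 'DataSet' type, 'split' property, and URI are illustrative.
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

config = metadata_store_pb2.ConnectionConfig()
config.sqlite.filename_uri = '/tmp/mlmd.db'  # local demo store
config.sqlite.connection_mode = 3            # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(config)

# Declare an artifact type with one typed property.
dataset_type = metadata_store_pb2.ArtifactType()
dataset_type.name = 'DataSet'
dataset_type.properties['split'] = metadata_store_pb2.STRING
type_id = store.put_artifact_type(dataset_type)

# Record the artifact a pipeline step consumed or produced.
artifact = metadata_store_pb2.Artifact()
artifact.type_id = type_id
artifact.uri = 'gs://my-bucket/data/train'   # hypothetical URI
artifact.properties['split'].string_value = 'train'
[artifact_id] = store.put_artifacts([artifact])
print('Recorded artifact with ID', artifact_id)
```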

In addition, AI Platform Pipelines supports pipeline versioning, which allows developers to upload multiple versions of the same pipeline and group them in the UI, as well as automatic artifact and lineage tracking. Native artifact tracking covers things like models, data statistics, and model evaluation metrics, while lineage tracking shows the history and versions of your models, data, and more.

Google says that in the near future, AI Platform Pipelines will gain multi-user isolation, which will let each person accessing the Pipelines cluster control who can access their pipelines and other resources. Other forthcoming features include workload identity to support transparent access to Google Cloud Services; a UI-based setup of off-cluster storage of backend data, including metadata, server data, job history, and metrics; simpler cluster upgrades; and more templates for authoring workflows.

Terminal taps AI to help companies build remote engineering teams

Above: Terminal recently set up its first international remote engineering hub in Mexico

The COVID-19 crisis has thrust the issue of remote working to the top of corporate agendas globally, with online communication and collaboration tools gaining traction. But many companies already operated remote “distributed” workforces — WordPress parent Automattic has long embraced remote working, as have GitLab, GitHub, and Basecamp. Last year, payments giant Stripe launched a new remote engineering hub to help it access a bigger global pool of tech talent.

And then there is Terminal, which helps startups build remote engineering teams by doing all the heavy lifting for them. Terminal operates strategically placed engineering hubs that can be used by U.S. companies to tap technical talent abroad. But the physical hubs are only part of the story, as many of Terminal’s clients already operate entirely remotely but use Terminal to sidestep the logistical headaches that come with hiring in other countries, including navigating local legal and tax structures, running recruitment processes, and operating essential services such as payroll and HR. It’s all about helping companies scale their engineering teams without requiring staff to relocate.

With the current pandemic likely forcing companies to embrace remote working long into the future, if not permanently, Terminal could find itself in high demand.

“Our campuses have always served as a meeting point for teams to gather, brainstorm, and explore creative solutions but were never mandated as part of the Terminal experience,” CEO Clay Kellogg told VentureBeat. “We have clients today that are completely remote and have found [that] our processes, tools, and systems — especially around building distributed cultures — have been vital to ease the transition. We expect this to be a growth area as a result of COVID-19.”

The AI factor

Terminal, which launched out of stealth back in 2017 and has two corporate hubs in San Francisco and New York, counts several tech-focused campuses throughout Canada and Mexico. This week, Terminal announced its first acquisition, snapping up Austin, Texas-based AI-powered recruitment startup Roikoi for an undisclosed figure. This comes shortly after Terminal raised $17 million from big-name investors such as Kleiner Perkins, adding to a previous roster of investors that includes Yahoo founder Jerry Yang and Peter Thiel, via his Thiel Capital VC firm.

As a result of the acquisition, the Roikoi brand is no more; the service is available only as part of Terminal, where it has been deployed as an automated referral-sourcing tool called the Terminal Talent Graph.

The Talent Graph helps Terminal engineers identify and “vote up” top talent in their professional networks, with the Roikoi engine automatically matching the most highly ranked engineers with open positions across all the hiring companies in Terminal’s client base, which includes Hims & Hers, Bluescape, and Nextdoor. The platform also features built-in engagement and outreach tools to enable conversations with candidates.

Using natural language processing (NLP), Terminal can now scan job descriptions and resumes to find the best matches, parsing specifics such as technical expertise, location, years of experience, and employment history, among other factors. The scoring algorithm adopts a heuristic model to identify the best candidates, with machine learning (ML) deployed to train the system based on past performance in successfully matching candidates with positions.
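
Terminal hasn’t published its matching code, but as a rough illustration of the approach described above (scoring resume-to-job similarity over parsed text), here is a toy sketch using TF-IDF and cosine similarity; it is not Terminal’s engine, which would also weigh structured fields and learn from past placements.

```python
# Toy illustration of NLP-based candidate/job matching using TF-IDF
# cosine similarity. Not Terminal's actual engine: a production system
# would parse structured fields (skills, location, tenure) and learn
# ranking weights from past placement outcomes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job = "Senior backend engineer, Python, 5+ years, distributed systems"
resumes = [
    "Backend engineer, 6 years of Python and Go, built distributed queues",
    "Frontend developer, 3 years of React and TypeScript",
]

vectorizer = TfidfVectorizer(stop_words='english')
matrix = vectorizer.fit_transform([job] + resumes)

# Rank resumes by textual similarity to the job description.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for resume, score in sorted(zip(resumes, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {resume}")
```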

Above: Roikoi’s dashboard showing referrals

The timing of the acquisition is certainly notable, coming as many technology companies — large and small — will find it difficult to hire. The tech talent pool in the U.S. was already stretched, and social distancing and travel restrictions have only compounded the situation.

While remote working was already a growing trend, COVID-19 seems to have accelerated the push toward cloud-based collaboration and communication. Earlier this week, Alibaba revealed plans to invest $28 billion in cloud infrastructure over the next three years, while Amazon’s first African datacenters recently opened for business (though this was already in the works). Elsewhere, Zoom’s video-conferencing user base has gone through the roof, jumping from 10 million daily users in December to more than 300 million today.

“A remote inflection point was coming with the rise of cloud platforms and global nature of talent, but no one could have predicted the acceleration of the shift caused by COVID-19,” Kellogg added. “The global shutdown has ushered in sudden investment in infrastructure to enable distributed operations and a new generation of remote workers and managers.”

Then there is the issue of immigration. The U.S. has adopted an increasingly hardline approach that has only been exacerbated by the COVID-19 pandemic. President Trump this week enforced an emergency suspension of some immigration green cards due to the pandemic. The ensuing uncertainty could impact the global talent pool available to U.S. tech companies — Duolingo cofounder and CEO Luis von Ahn said the green card restrictions may force it to “open offices and move jobs” outside the U.S.

There are other factors at play here, including the issue of climate change. Air pollution has dropped substantially as cities have entered lockdown, leading major conurbations such as Milan to rethink their use of cars — 35 kilometers (22 miles) of streets will be turned over to cyclists and pedestrians once the Italian city’s lockdown lifts.

Once we emerge from the COVID-19 crisis, it’s likely companies and governments will retain certain protocols established during lockdown, whether in the name of reducing carbon emissions or opening to a much broader tech talent pool. Just yesterday, recruitment search engine Indeed rolled out a new feature that allows employers to specify whether a job they’re advertising is remote, be that permanently or temporarily due to COVID-19.

“Accessing global talent outside of a company’s traditional HQ market has always been the Terminal approach, but now we’re seeing a lot more support for this way of thinking and a dispelling of myths about loss of productivity,” Kellogg added. “This is opening up new doors for us with more traditional leaders who didn’t support remote work but now understand the many benefits of doing so. As the economy recovers, everyone expects remote workers to lead the road to recovery.”

Trustpilot: Confidence in consumer reviews dips amid censorship and fake news

The past decade has seen a sharp drop in trust of institutions such as the government and media. But while people are turning to other people for information, one of the most critical online tools they use for sharing information, online reviews, is also seen as increasingly suspect.

A report by online review site Trustpilot and behavioral analytics firm Canvas8 found that while 90% of U.S. consumers now read some kind of online reviews before purchasing a product, about half of those people believe the reviews were likely being manipulated in some fashion.

The findings come as brands and news organizations are struggling to battle a growing cynicism that has been fed by claims of fake news and misinformation campaigns. The battered reputation of online reviews reflects the growing lack of trust in information found online and could present a critical hurdle to companies seeking to ramp up their ecommerce operations.

“I think one of the things that defines the trust crisis that we live in is the state of online reviews,” said Peter Mühlmann, founder and CEO of Trustpilot. “If I compare today to 10 years ago, people are increasingly realizing the need to trust online institutions. But I think the consumer awareness around the problem with fake reviews is hurting many brands.”

Online reviews have always had a somewhat mixed reputation. Yelp, for instance, helped popularize user reviews of local businesses but at the same time has faced criticism from merchants who sometimes said they felt pressured to pay for services to ensure good ratings. Amazon has also attempted to weed out fake reviews by tracking people’s connections via their social media accounts.

Even as these companies took steps to improve comment quality and reliability, embedding online reviews across just about any ecommerce site became standard. Trustpilot, founded in 2007 and based in Denmark, has raised $198 million in venture capital, including a $55 million round last year. The company has developed a platform that employs a clear set of rules with artificial intelligence to ensure that reviews are transparent and reliable.

Still, the broader world of reviews remains a murky place, which casts a negative light on the whole practice. Part of the problem, according to Mühlmann, is that efforts by companies to manage their online reputations can include strategies that feel devious to customers, and so end up backfiring.

As an example, Mühlmann cited what he refers to as “fake reviews 2.0.” These involve real customers and their real opinions, but organized in a way that misleads. For instance, a business might invite a customer to share their opinion immediately after purchasing an item but not give them a way to update that rating a year later if things have gone sour.

Another common tactic is to only invite customers the company believes had a positive experience to leave a comment. Creating expiration dates for bad comments is another tool. And despite efforts by companies such as Amazon to crack down on paid reviews, companies continue to find a way around this through schemes such as hunting down positive reviews on their Facebook pages and then either encouraging or paying those people to review them on other sites to improve their rankings.

“In the last two or three years, we’ve been asking ourselves how our data is being used,” he said. “We also need to ask ourselves how our opinions are being used.”

In all of these cases, the company risks breaking faith with the customers who left comments, even positive ones, if those customers later feel they were used to paint an overly rosy picture. They could also feel their time was wasted.

At the same time, online reviews have become an essential part of the shopping experience. Simply eliminating them could also raise questions in the minds of potential customers.

“Opinion sharing became something that we really take for granted,” Mühlmann said. “I like to say that today it’s impossible to buy a pizza without having to review the company, the delivery person, and the app. Not being asked about your opinion is considered weird. And not being able to find the customers’ reviews is really frustrating.”

The key in this case is transparency. Mühlmann said being open about the rules for who leaves comments and how they are treated is crucial. Of course, that requires companies to accept public criticism, something many are still not accustomed to or comfortable with.

But Mühlmann said such attitudes are not giving customers enough credit. When a company can show it has responded to a sharp critique or a complaint, he says, that often leaves a positive impression with potential customers.

“It’s important that businesses realize in the future that it matters how you are collecting opinions,” he said. “Consumers understand that things go wrong. So the absence of negatives in your comments can seem like a bigger problem. But businesses that turn problems into opportunities will be rewarded. It doesn’t matter what you are buying. You want to see what other people are dissatisfied with, and then decide if you can live with that and how the company reacts.”

With ecommerce purchases getting a boost during coronavirus lockdowns, many consumers are now trying out services such as grocery or meal deliveries for the first time. Many of them will first turn to online reviews, and that is the first place where companies can start to build a sense of confidence among those potential buyers, he said.

“The trust contract in the online society has gotten a whole lot more important,” Mühlmann said. “If there’s not trust, then the transaction can’t take place.”

Intel CEO: Bad companies are destroyed by crises ... great companies are improved by them

Above: Intel's former CEO Andy Grove talked about crises' impact on companies.

Image Credit: Intel

Intel CEO Bob Swan cited a quote from former CEO Andy Grove as particularly apt during the pandemic. In a call with analysts, Swan noted that Grove once said, “Bad companies are destroyed by crises; good companies survive them; great companies are improved by them.”

Swan made the remarks after reporting what he said were “outstanding” results for both earnings and revenues in the “incredibly challenging” first quarter. But investors were spooked and drove the stock down 5% in after-hours trading, in part because Intel decided not to offer full financial guidance for all of 2020, due to uncertainties in the market.

Intel also said its gross profit margins, or the money it makes on the sale of its products, would likely be lower in the second quarter. That is in part because the company is recording higher expenses as it prequalifies the manufacturing of its second generation of 10-nanometer products — which is considered a normal expense in a process technology transition. Intel is ramping its code-named “Tiger Lake” 10-nanometer processors faster than it previously expected. Again, that may have spooked investors, even though Intel predicted its second-quarter results would be better than anticipated.

Intel grew its data-centric business 34% in Q1, and data-centric revenues are now 51% of total revenues, while PC revenues grew 14%. The company’s factories are delivering more than 90% of orders on time. Only essential personnel are going into those factories, but Swan said the facilities — because of requirements for purity in manufacturing — are among the cleanest places in the world. Intel saw its supply chain affected in January, but those partners are now back to work and production is increasing every week.

Above: Bob Swan is CEO of Intel.

Image Credit: Dean Takahashi

Intel said it has pledged $100 million in funding to support its 110,000 employees. It has also pledged $50 million in resources and cash to fight the coronavirus. The company has paused a few construction projects at smaller sites, but Swan foresees no impact on process technology or product launches.

“I want to thank and commend all the Intel employees and supply chain partners who have helped keep our business operating during this unprecedented challenge,” Swan said. “I want to give special praise to those working in our factories and labs and other on-site personnel who have role-modeled the values of our company every day and every shift — I am so incredibly proud of your effort and commitment.”

He said Intel continues its strategy of widening its market opportunity by making more kinds of chips that go into electronic systems and computing products, such as graphics chips and Optane memory.

Intel repurchased $4.2 billion in shares in Q1 and stopped all share repurchases on March 24. It also raised $10.3 billion in debt to prepare for a rainy day.

When it comes to fighting the current crisis and any future pandemics, “COVID-19 has only reinforced how important it is for Intel and our customers to accelerate the power of data,” Swan said.

He also said that strong demand for laptops in Q1 — for working from home and learning from home — was offset by the pandemic’s impact on global gross domestic product (GDP). Swan added that government and enterprise spending is likely to be weaker in the second half of the year. At some point, Intel expects the pandemic to affect global demand for PCs during the remainder of the year.

“We recognize that our local and global communities need us to continue delivering technology to help overcome this COVID-19 challenge, and we’re fully focused on that task,” Swan said.

He closed by saying, “Our purpose is to create world-changing technology that enriches the lives of every person on Earth. That’s never been more important than now … We will emerge from this global crisis even stronger.”

Intel reports Q1 2020 revenue of $19.8 billion, up 23% despite coronavirus

Above: Bob Swan is the 7th CEO of Intel.

Image Credit: Intel

Intel reported 63% year-over-year adjusted earnings growth and 23% year-over-year revenue growth for the first quarter, beating Wall Street’s targets for financial performance in a quarter that was affected at the end by the global pandemic.

The earnings report from one of the world’s biggest chipmakers is an important bellwether for the tech industry. But it’s also an early clue to how the coronavirus will affect the broader tech ecosystem. Most of January and February had normal results, but in March the early impact of COVID-19 was clearly evident.

Intel reported first-quarter revenues of $19.8 billion, up 23% from a year ago and driven by 34% growth in the data-centric business and 14% growth in PC revenues compared to a year ago. Adjusted earnings per share hit $1.45, up 63% from 89 cents per share a year earlier. Analysts expected the company to report profits of $1.27 a share on revenue of $18.7 billion.

Intel CEO Bob Swan said in a statement, “Our first-quarter performance is a testament to our team’s focus on safeguarding employees, supporting our supply chain partners, and delivering for our customers during this unprecedented challenge. The role technology plays in the world is more essential now than it has ever been, and our opportunity to enrich lives and enable our customers’ success has never been more vital. Guided by our cultural values, competitive advantages, and financial strength, I am confident we will emerge from this situation an even stronger company.”

The datacenter group saw revenue up 43% from a year ago, with 53% growth in cloud-service provider revenue. Intel’s memory and Mobileye businesses saw record quarters. The PC-centric business grew 14% versus a year ago, exceeding Intel’s expectations, on improved CPU supply and demand strength as consumers and businesses are relying on PCs for working and learning from home.

Intel’s stock price is down 3.74% to $56.83 a share in after-hours trading. Analysts had expected Intel to report full-year earnings per share of $4.83 on revenues of $72.4 billion.

The chipmaker also faces bigger competition than ever before from rival Advanced Micro Devices (AMD), which has more competitive chips across the board than it has ever had, thanks to its Zen 2 chip architecture and 7-nanometer manufacturing with partners like TSMC. AMD has steadily gained market share on Intel in the core processor market.

What a difference a quarter makes. The last earnings call took place in January after the massive CES gathering in Las Vegas, where thousands of companies showed off new wares. Now you can’t even walk down the Las Vegas Strip.

During the quarter, Intel donated $50 million in resources and cash to fight the coronavirus. It also launched its 10th Gen Core H-Series laptop processors. On March 24, Intel suspended share buybacks in light of the pandemic. The dividend is unchanged.

For the second quarter, Intel expects a sequential dip in revenue to $18.5 billion, with a non-GAAP operating margin of 30% and earnings per share of about $1.10. Analysts had previously estimated revenues of $17.97 billion.

Patrick Moorhead, analyst at Moor Insights & Strategy, said in a message:

Intel had a stellar Q1 revenue-wise as it increased 23% year on year and better than guidance. The datacenter group showed significant strength with an eye-watering 43% growth demonstrating continued growth from cloud, carriers, and enterprise. PCs were up 14% driven by strong sell-in to meet anticipated demand from work/school/govern from home. Like other tech giants, Intel pulled its annual guide, but it did keep it for Q2, which I thought was pretty good. I think investors will want to know more about the gross margin percent drop in Q2, but I think it’s likely costs associated with 10-nanometer Tiger Lake qualification.

Magic Leap's consumer retreat is good news for the AR/XR industry

Opinion

Above: Magic Leap 1 is a tool for 3D visualization.

Image Credit: Magic Leap

Although I cover the VR and AR industries with genuine enthusiasm, writing about Magic Leap has never been fun. The company radically overpromised what it would deliver as a user experience, created a product too expensive for consumers to buy, and barely found third-party developers willing to support its platform with apps. On the (very rare) occasions I’ve actually seen Magic Leap headsets in public, they’ve proved fidgety to wear, with small “augmented” viewing areas and only modestly interesting software.

Yesterday, we reported that Magic Leap was laying off an unspecified number of employees and sharpening its focus on enterprise opportunities, but its blog post didn’t do justice to what was happening. As affected employees took to social media, it became clear that the company was shuttering its consumer business and eliminating half of its workforce — around 1,000 people. “Pivot” is way too soft of a word to describe the carnage, and my heart broke for all the people whose jobs were lost yesterday.

It would be easy to describe the news as a sign of some broader disinterest in AR or mixed reality (XR) technologies, but that’s clearly not true. The coronavirus pandemic has made clear that XR devices will be highly useful for helping people work at home and participate in social gatherings without being physically present. Over the next few years, a CEO’s ability to lead a meeting holographically may be as plausible as Emperor Palpatine directing his minions from afar — science fact following science fiction, and possibly even becoming a symbol of tech savvy and status.

As much as I respect some of the talented people who work (or worked) at Magic Leap, it hasn’t ever struck me as a viable consumer AR device maker. I’ll give the company bravery points for daring to dream so big on augmented reality that it needed to create not only the glasses but also its own operating system, computing device, ecosystem, and “Magicverse” to power them. It also wins a boldness award for actually securing the funding, people, technologies, and manufacturing necessary to bring its vision to life. But throwing all the money in the world at Magic Leap was not going to result in a consumer product.

For the larger mixed reality industry, the problem was that much of the actually available XR investment money in the world had already been thrown at Magic Leap. XR startups routinely (and sometimes loudly) complained that they couldn’t get much or any funding for their projects because Magic Leap’s grand vision had sucked all the cash and energy out of the investing community. So there was a zero-sum game here to some extent; by exiting the consumer space, Magic Leap is simultaneously making its former consumer talent available to other XR companies and enabling investors interested in consumer AR to place their bets elsewhere — whenever they’re ready to spend cash again.

Magic Leap had a lot of hurdles to overcome as a consumer business, but most of them come down to the financial consequences of a tech startup trying to own the “whole stack” rather than just a piece of the pie. Instead of just getting into the AR glasses business, which has cost even the trillion-dollar Apple years of time and untold cash, Magic Leap built (and then had to convince people to buy) the computer and OS to power them. Average consumers might be willing to spend $500 on AR glasses, but Magic Leap set its initial price for developers at $2,300, then offered a $3,000 package in an effort to cover ongoing service costs.

For consumers, those numbers screamed “no way” from the start and never improved. If Magic Leap had relied on an existing mobile platform — iOS or Android — it could have focused on just getting the glasses right. That’s what Nreal is doing with its Light consumer AR glasses, which presume users will have a Qualcomm Snapdragon-powered phone running Android, enabling the glasses to hit a $500 price point. Developers who want to support Light can just build Android apps with Nreal-specific features, which makes a lot of sense, particularly for XR companies with limited coding resources.

Apart from Nreal, many companies in the XR space could benefit from an outflux of Magic Leap resources. Niantic — a proven success in smartphone-based AR — is apparently building its own hardware and software solution with support from Qualcomm, and it could pick up some of Magic Leap’s gaming talent. Lots of smaller developers could pick up individuals who have spent years of their lives working to solve big and little problems in the AR space ahead of broad consumer adoption.

And don’t shed a tear for Magic Leap. It has had billions of dollars at its disposal to tackle virtually every conceivable aspect of how augmented reality will impact daily life — a subject Apple, Facebook, and other well-funded tech companies continue to pursue in earnest, guaranteeing that AR will indeed become a big deal soon enough. Even after the layoffs, the company still has plenty of money, a thousand people, and major corporate backers (such as Google and AT&T) to help it focus on enterprise AR, which is more than most of its glasses-making rivals can say.

Magic Leap’s troubles aside, my gut feeling is that everything’s going to work out just fine for the XR industry as a whole. It’s clear at this point that the Magicverse won’t be at the center of everything. But once the dust settles, expect a big bang-class multiverse of competing and perhaps more viable visions, backed by more practical and affordable AR hardware.

Google claims its AI can design computer chips in under 6 hours

Google AI logo
Image Credit: Khari Johnson / VentureBeat

In a preprint paper coauthored by Google AI lead Jeff Dean, scientists at Google Research and the Google chip implementation and infrastructure team describe a learning-based approach to chip design that can learn from past experience and improve over time, becoming better at generating architectures for unseen components. They claim it completes designs in under six hours on average, which is significantly faster than the weeks it takes human experts in the loop.

While the work isn’t entirely novel — it builds upon a technique proposed by Google engineers in a paper published in March — it advances the state of the art in that it implies the placement of on-chip transistors can be largely automated. If made publicly available, the Google researchers’ technique could enable cash-strapped startups to develop their own chips for AI and other specialized purposes. Moreover, it could help to shorten the chip design cycle to allow hardware to better adapt to rapidly evolving research.

“Basically, right now in the design process, you have design tools that can help do some layout, but you have human placement and routing experts work with those design tools to kind of iterate many, many times over,” Dean told VentureBeat in an interview late last year. “It’s a multi-week process to actually go from the design you want to actually having it physically laid out on a chip with the right constraints in area and power and wire length and meeting all the design roles or whatever fabrication process you’re doing. We can essentially have a machine learning model that learns to play the game of [component] placement for a particular chip.”

Above: Placements of Ariane, an open-source processor, as training progresses. On the left, the policy is being trained from scratch, and on the right, a pre-trained policy is being fine-tuned for this chip. Each rectangle represents an individual macro placement.

Image Credit: Google

The coauthors’ approach aims to place a “netlist” graph of logic gates, memory, and more onto a chip canvas, such that the design optimizes power, performance, and area (PPA) while adhering to constraints on placement density and routing congestion. The graphs range in size from millions to billions of nodes grouped in thousands of clusters, and typically, evaluating the target metrics takes from hours to over a day.

The researchers devised a framework that directs an agent trained through reinforcement learning to optimize chip placements. (Reinforcement learning agents are spurred to complete goals via rewards; in this case, the agent learns to make placements that will maximize cumulative reward.) Given the netlist, the ID of the current node to be placed, and the metadata of the netlist and the semiconductor technology, a policy AI model outputs a probability distribution over available placement locations, while a value model estimates the expected reward for the current placement.

In practice, starting with an empty chip, the abovementioned agent places components sequentially until it completes the netlist. It doesn’t receive a reward until the end, when a negative weighted sum of proxy wirelength (which correlates with power and performance) and congestion is tabulated (subject to density constraints). To guide the agent in selecting which components to place first, components are sorted by descending size; placing larger components first reduces the chance that no feasible placement remains for them later.
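
The paper describes this loop at a high level; below is a heavily simplified sketch of one placement episode, with a uniform stand-in for the learned policy and toy cost functions in place of the real wirelength and congestion estimators (grid size, component sizes, and weights are all illustrative).

```python
# Heavily simplified sketch of the placement episode described above.
# The uniform "policy" and toy cost functions are stand-ins for the
# paper's learned policy/value networks; grid size, component sizes,
# and weights are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    size: int

GRID = [(r, c) for r in range(8) for c in range(8)]  # toy chip canvas

def policy(placements, component, free_cells):
    # Stand-in for the learned policy: a distribution over free cells.
    return {cell: 1.0 / len(free_cells) for cell in free_cells}

def wirelength(placements):
    # Toy proxy: bounding-box spread of all placements (real flows
    # use half-perimeter wirelength over netlist edges).
    rows = [r for r, _ in placements.values()]
    cols = [c for _, c in placements.values()]
    return (max(rows) - min(rows)) + (max(cols) - min(cols))

def congestion(placements):
    return 0.0  # omitted in this toy sketch

def episode(netlist, w_wl=1.0, w_cong=0.5):
    placements, free_cells = {}, list(GRID)
    # Largest components first, reducing the chance that no feasible
    # spot remains for them late in the episode.
    for comp in sorted(netlist, key=lambda c: c.size, reverse=True):
        probs = policy(placements, comp, free_cells)
        cell = max(probs, key=probs.get)  # greedy action
        placements[comp] = cell
        free_cells.remove(cell)
    # Sparse terminal reward: negative weighted sum of proxies.
    return placements, -(w_wl * wirelength(placements) +
                         w_cong * congestion(placements))

netlist = [Component('sram', 8), Component('macro_a', 4), Component('macro_b', 2)]
print('reward:', episode(netlist)[1])
```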

Above: Training data size versus fine-tuning performance.

Image Credit: Google

Training the agent required creating a data set of 10,000 chip placements, where the input is the state associated with the given placement and the label is the reward for the placement (i.e., wirelength and congestion). The researchers built it by first picking five different chip netlists, to which an AI algorithm was applied to create 2,000 diverse placements for each netlist.

In experiments, the coauthors report that as they trained the framework on more chips, they were able to speed up the training process and generate high-quality results faster. In fact, they claim it achieved superior PPA on in-production Google tensor processing units (TPUs) — Google’s custom-designed AI accelerator chips — as compared with leading baselines.

“Unlike existing methods that optimize the placement for each new chip from scratch, our work leverages knowledge gained from placing prior chips to become better over time,” concluded the researchers. “In addition, our method enables direct optimization of the target metrics, such as wirelength, density, and congestion, without having to define … approximations of those functions as is done in other approaches. Not only does our formulation make it easy to incorporate new cost functions as they become available, but it also allows us to weight their relative importance according to the needs of a given chip block (e.g., timing-critical or power-constrained).”

Google launches Android 11 Developer Preview 3 with app exit reasons, ADB Incremental, and wireless debugging

Android 11 Developer Preview logo

Google today launched the third Android 11 developer preview with app exit reasons updates, GWP-ASan heap analysis, Android Debug Bridge (ADB) Incremental, wireless debugging, and data access auditing. You can download Android 11 DP3 now from developer.android.com — if you have the previous preview, Google will also be pushing an over-the-air (OTA) update. The release includes a preview SDK with system images for the Pixel 2, Pixel 2 XL, Pixel 3, Pixel 3 XL, Pixel 3a, Pixel 3a XL, Pixel 4, and Pixel 4 XL, as well as the official Android Emulator.

Google launched Android 11 DP1 in February, the earliest Android developer preview it has ever released, and Android 11 DP2 in March. Last year, Google used the Android Beta Program, which lets you get early Android builds via over-the-air updates on select devices. This year, however, Google is not making the first few previews available as betas (you’ll need to manually flash your device). In other words, Android 11 is not ready for early adopters to try, just developers. Like DP1 and DP2, Android 11 DP3 is only available on eight Pixel phones. That’s a tiny slice of the more than 2.5 billion monthly active Android devices — the massive install base that makes developers eager to see what’s new for the platform in the first place. Google will likely release Android 11 to more phones with the first beta. To help Google get there, you can give feedback and report bugs here.

Android 11 wireless debugging

Android 11 DP1 brought 5G experiences, people and conversations improvements, Neural Networks API 1.3, privacy and security features, Google Play System updates, app compatibility, connectivity, image and camera improvements, and low latency tweaks. DP2 built on those with foldable, call screening, and Neural Networks API improvements. DP3 adds three new features and makes two improvements to existing ones.

Android 11 DP3 features

Here’s the rundown of what’s new (see diff report and release notes) in Android 11 Developer Preview 3:

  • App exit reasons updates: Android 11 has an exit reasons API that helps you figure out why an app exited, including crashes, system kills, and user actions. DP3 brings a few updates based on developer input.
  • GWP-ASan heap analysis: A sampling allocation tool that detects heap memory errors with minimal overhead or impact on performance. GWP-ASan runs by default in platform binaries and system apps, and you can now enable it for your apps as well. If your app uses native code or libraries, it’s another way to find and fix memory safety issues.
  • ADB Incremental: Installing large APKs (2GB+) from your development computer to an Android 11 device is now up to 10x faster. To use this new developer tool, you’ll need to sign your APK with the new APK signature scheme v4 format, and then install your APK with the updated ADB command line tool. In DP3, ADB Incremental only works with Pixel 4 and Pixel 4 XL due to a required file system change at the device level.
  • Wireless Debugging: A completely revamped debugging experience uses ADB over a Wi-Fi connection. Unlike the existing TCP/IP debugging workflow, Wireless Debugging on Android 11 does not need a cable to set up, remembers connections over time, and can utilize the full speed of the latest Wi-Fi standards. An integrated experience for wireless debugging with QR code scanning is coming in a future Android Studio release.
  • Data access auditing updates: Instrument your app to better understand how it accesses user data and from which user flows. In DP3, Google renamed several of these APIs.

Preview/Beta schedule

After you’ve flashed Android 11 onto your device or fired up the Android Emulator, you’ll want to update your Android Studio environment with the Android 11 Preview SDK (set up guide). Then install your current production app and test the user flows. For a complete rundown on what’s new, check the API overview, API reference, and behavior changes. To help developers test, Google made many of the targetSdk changes toggleable, so you can force-enable or disable them individually from Developer options or ADB. The greylists of restricted non-SDK interfaces can also be enabled/disabled.

The goal of the developer previews is to let developers explore new features and APIs for apps early, test for compatibility, and give feedback. Normally, more details would be shared during Google’s developer conference in May, but given that event has been canceled, Google will have to adjust. Either way, expect more new features and capabilities in the first beta.

Android 11 beta timeline

Last year, there were six betas. This year, there will be three developer previews and three betas. Here’s the preview/beta schedule for Android 11:

  • February: Developer Preview 1 (Early baseline build focused on developer feedback, with new features, APIs, and behavior changes.)
  • March: Developer Preview 2 (Incremental update with additional features, APIs, and behavior changes.)
  • April: Developer Preview 3 (Incremental update for stability and performance.)
  • May: Beta 1 (Initial beta-quality release, over-the-air update to early adopters who enroll in Android Beta.)
  • June: Beta 2 (Platform Stability milestone. Final APIs and behaviors. Play publishing opens.)
  • Q3: Beta 3 (Release candidate build.)
  • Q3: Final release (Android 11 release to AOSP and ecosystem.)

Google is asking developers to make their apps compatible with Android 11 so that their users can expect a seamless transition when they upgrade. “When we reach Platform Stability, system behaviors, non-SDK greylists, and APIs are finalized,” Google VP of engineering Dave Burke wrote today. “At that time, plan on doing your final compatibility testing and releasing your fully compatible app, SDK, or library as soon as possible so that it is ready for the final Android 11 release.”

Peak.AI raises $12 million to bolster enterprise AI adoption

Peak.ai
Image Credit: Peak.ai

Peak.AI, a startup developing AI solutions for enterprise customers, today announced that it has raised $12 million in extended series A funding. The fresh capital will fuel Peak’s growth, commercial expansion, and R&D, according to CEO Richard Potter, and it comes as up to 25% of companies report a 50% failure rate when deploying AI models.

Despite the promise of AI, the corporate sector’s adoption curve hasn’t been as steep as some had predicted. A survey of publicly traded U.S. retailers’ earnings calls found that only nine of about 50 companies had started to discuss an AI strategy, and a separate study — from Genesys — found that 68% of workers aren’t yet using tools that leverage AI.

Peak aims to simplify implementation with a subscription-based software-as-a-service offering that spans infrastructure, data processing, workflow, and applications. Its customers — brands like Pepsi and Marshalls — supply their data, which Peak’s platform ingests through built-in connectors to accomplish things like optimizing supply and demand and supporting fulfillment processes, courtesy of a library of configurable AI engines.

Once AI engines go live, their predictive and prescriptive outputs can be exposed through APIs or explored, visualized, and exported with Peak’s Data Studio. The platform can handle data sets of virtually any size running on Amazon Web Services, and it serves models in an always-on fashion so that they self-improve over time. It also screens all ingested data through an algorithm to identify and anonymize any personally identifiable information.

Peak’s team optionally works with customers to define objectives, quantify opportunities using a sample of data, and scope out a business case for sign-off and launch. It’ll take care of kick-off and onboarding, as well as operationalizing, and it’ll configure the solutions to individual user needs.

There’s no shortage of fully managed AI solutions with substantial venture backing. H2O recently raised $72.5 million to further develop its platform that runs on bare metal or atop existing clusters and supports a range of statistical models and algorithms. And Cnvrg.io — which recently launched a free community tier — has raised $8 million to date for its end-to-end AI model tracking and monitoring suite.

But Peak claims its platform is more performant than rival offerings. It says it has helped customers achieve a 28% uplift in marketing revenues, a fourfold increase in return on capital employed, and a 147-ton reduction in CO2 emissions through optimized logistics and resource planning.

MMC Ventures and Praetura Ventures led the series A round, which brings Manchester-based Peak’s total funding to $18 million. The company was founded in December 2014 by CEO Richard Potter, David Leitch, and Atul Sharma and has additional offices in Jaipur and Edinburgh.

FreeWire raises $25 million to bring ultrafast charging to electric vehicles

As the transportation industry slowly moves toward widespread adoption of electric vehicles, appropriate infrastructure — like charging stations in place of gas stations — will be essential. That’s why California-based startup FreeWire Technologies, which builds charging products for electric vehicles (EVs), has secured $25 million in a series B round of equity and debt funding led by BP Ventures.

While EVs are still a small percentage of overall car sales — around 2% of new car sales last year in the U.S. — electrification seems to be trending upward in many parts of the world. In Norway, for example, 56% of all new car sales last year were either all-electric or plug-in hybrids. Elsewhere, Volvo has committed to only making vehicles with electric motors, while Ford recently lifted the lid on its first long-range electric car. Throw into the mix the fact that all autonomous vehicles will be electric, and it seems clear electric vehicles will become the norm at some point in the future — though how long that transition will take remains to be seen.

Founded out of San Leandro in 2014, FreeWire offers a mobile EV charging station that doesn’t rely on traditional infrastructure. Traditional EV charging stations usually involve costly installations, whereas FreeWire’s can be charged from a normal wall outlet and then deployed anywhere to power multiple vehicles in a day. This could be useful for businesses that run small fleets delivering parcels, for example. Separately, FreeWire also offers an electric mobile generator that can be used to power just about anything in a remote location.

Last year, FreeWire launched a new ultrafast EV charger called Boost that can be deployed with existing infrastructure to provide 160kWh of battery capacity and 120kW output. This equates to a range of up to 480 miles from a one-hour charge. But FreeWire says the main selling point compared to other fast chargers is a promised 40% lower installation cost. Moreover, FreeWire said the Boost Charger alleviates excess strain on the electrical grid by recharging itself during off-peak hours at a lower voltage.
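
Those figures are roughly self-consistent. As a back-of-the-envelope check, assuming a typical EV efficiency of about 4 miles per kWh (our assumption, not a FreeWire number):

```python
# Sanity check of FreeWire's quoted figures, assuming a typical EV
# efficiency of ~4 miles per kWh (an assumption, not a FreeWire spec).
output_kw = 120        # Boost Charger output
charge_hours = 1
miles_per_kwh = 4      # assumed vehicle efficiency

energy_kwh = output_kw * charge_hours      # 120 kWh delivered
added_range = energy_kwh * miles_per_kwh   # ~480 miles
print(f"~{added_range} miles of range from a {charge_hours}-hour charge")
```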

Above: FreeWire Boost Charger

FreeWire had previously raised around $30 million, including its $15 million series A back in 2018, and with its latest cash injection the company said it will double down on efforts to get its Boost Charger to market in Q2 2020.

Numerous players are working on EV charging infrastructure, including San Francisco-based Volta, which closed a $20 million funding tranche last year, and ChargePoint, which has raised north of $500 million. BP’s decision to invest more money in FreeWire after having led its previous funding round fits into a broader trend of traditional “fossil fuel” companies future-proofing their businesses by exploring “green” alternatives. Last year, for example, oil and gas giant Shell acquired LA-based Greenlots, which produces EV charging stations and software.

David Gilmour, global managing director of BP Ventures, said this latest investment fits into the firm’s strategy of “diversifying our energy portfolio” to meet anticipated future demand.

Researchers say deep learning will power 5G and 6G 'cognitive radios'

Image Credit: supparsorn/Getty

For decades, amateur two-way radio operators have communicated across entire continents by choosing the right radio frequency at the right time of day, a luxury made possible by having relatively few users and devices sharing the airwaves. But as cellular radios multiply in both phones and Internet of Things devices, finding interference-free frequencies is becoming more difficult, so researchers are planning to use deep learning to create cognitive radios that instantly adjust their radio frequencies to achieve optimal performance.

As explained by researchers with Northeastern University’s Institute for the Wireless Internet of Things, the increasing varieties and densities of cellular IoT devices are creating new challenges for wireless network optimization; a given swath of radio frequencies may be shared by a hundred small radios designed to operate in the same general area, each with individual signaling characteristics and variations in adjusting to changed conditions. The sheer number of devices reduces the efficacy of fixed mathematical models when predicting what spectrum fragments may be free at a given split second.

That’s where deep learning comes in. The researchers hope to use machine learning techniques embedded within the wireless devices’ hardware to improve frequency utilization, such that the devices can develop AI-optimized spectrum usage strategies by themselves. Early studies suggest that deep learning models average 20% higher classification accuracy than traditional systems when dealing with noisy radio channels, and will be able to scale to hundreds of simultaneous devices, rather than dozens. Moreover, the deep learning architecture developed for this purpose will be usable for multiple other tasks, as well.
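
The researchers haven’t released code here, but studies in this area (for example, modulation recognition on raw I/Q samples) typically use compact convolutional classifiers. A minimal sketch of such a model in Keras might look like the following; the window length and class count are illustrative assumptions, not parameters from the Northeastern work.

```python
# Minimal sketch of a deep learning radio-signal classifier of the
# kind used in cognitive-radio studies: a small 1-D CNN mapping raw
# I/Q sample windows to modulation classes. Window length and class
# count are illustrative, not from the Northeastern work.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 11   # e.g., distinct modulation schemes
WINDOW = 128       # I/Q samples per example

model = tf.keras.Sequential([
    # Two channels per time step: in-phase (I) and quadrature (Q).
    layers.Conv1D(64, 7, activation='relu', input_shape=(WINDOW, 2)),
    layers.MaxPooling1D(2),
    layers.Conv1D(128, 5, activation='relu'),
    layers.GlobalAveragePooling1D(),
    layers.Dense(128, activation='relu'),
    layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```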

One key challenge in implementing deep learning for this application is the massive amount of data that will need to be processed rapidly to do continuous analysis. Deep learning models can rely on tens of millions of parameters, and here they might need to ingest over a hundred megabytes of measurement data per second at millisecond timescales. This is beyond the capability of “even the most powerful embedded devices currently available,” the researchers note, and low latency demands that the results not be processed in the cloud.

So the goal will be to help shrink deep learning models to the point where they can run on small devices, and use complex testing facilities — “wireless data factories” — to improve the software as hardware improves, including raising its resilience against adversarial attacks. The researchers expect to use the learning in both 5G millimeter wave and future 6G terahertz hardware, which are expected to become even more ubiquitous than 4G devices over the next two decades, despite their ultra high-frequency signals’ susceptibility to physical interference.

Magic Leap announces layoffs and enterprise refocus to ensure second AR headset

Image Credit: Magic Leap

The Magic Leap 1 augmented reality headset hasn’t exactly been the world-changing success its developers intended. Today, the company announced that it’s making the difficult choice to lay off employees, cut costs, and refocus more heavily on enterprise customers so its next-generation model Magic Leap 2 will actually reach the market.

While the company hasn’t disclosed the scope of either the layoffs or the enterprise pivot, they’re not entirely surprising. Despite funding from major tech companies such as Google and the support of top U.S. wireless carrier AT&T, Magic Leap has struggled to win broad adoption from consumers or developers, in large part because of its $2,300 to $3,000 pricing. After focusing its initial “Creator Edition” model on developers, it relaunched the headset last December and focused on promoting enterprise applications. In recent months, the company reportedly hoped to be acquired for $10 billion, which seemed highly unlikely even before coronavirus outbreaks began across the world. Today’s announcement blamed the outbreaks for decreasing access to investment capital, forcing the internal changes.

Magic Leap’s announcement comes at a pivotal point for the mixed reality industry. Increasing demand for virtual reality headsets to facilitate work-from-home meetings and in-person online collaboration hasn’t yet been matched by consumer uptake for augmented reality glasses like Magic Leap’s. However, some AR headset makers are seeing major upticks in orders for glasses that can be used as tools for coronavirus screening or cooperative medical care. Generally, both VR and AR appear to have bright futures, but creating apps for Magic Leap’s standalone platform requires additional development work compared with Windows- and Android-based rivals.

For the time being, Magic Leap says that it’s working to negotiate revenue-generating enterprise deals for its platform, and changing its operations and spending to “ensure delivery of Magic Leap 2.” The company has openly discussed its second-generation headset since the launch of the original model, suggesting that physical prototypes already existed under cloths in its CEO’s office, but hasn’t provided specifics as to the new model’s features and changes. However, AT&T partnered with the company to provide 5G know-how and network infrastructure, implying that the headset might go from using Wi-Fi and a large puck-shaped wearable processing unit to a partially or substantially cellular design.

Among those laid off was Graeme Devine, who previously co-founded Trilobyte and created popular computer games The 7th Guest and The 11th Hour. He also worked on id Software’s Quake III Arena, and had a games-focused role at Apple. At Magic Leap, Devine served as chief creative officer and senior vice president of games, apps, and creative experiences. Devine indicated in social media posts that Magic Leap had decided to pivot to hardware, which could have broader implications for its software platform.

We’ve reached out to Magic Leap for additional details on the scope of the layoffs, and will update this article with those details if and when we know more. Bloomberg reports that “people familiar with the matter” have confirmed that around 1,000 people will be affected, or roughly half the company’s workforce, resulting in the winding down of Magic Leap’s consumer business.

Google says 500,000 developers use Flutter monthly, outlines release process and versioning changes

Above: 2 million developers have used Flutter to date

Google today revealed that “nearly half a million developers” now use its open source UI framework Flutter each month. And 2 million developers have used Flutter since version 1.0 was released in December 2018. This is the first time the company has shared user milestones for the SDK. Adoption isn’t slowing down: Flutter saw 10% month-over-month growth in March. And out of the 50,000 Flutter apps on Google Play, nearly 10,000 were uploaded in the past month.

Meant to compete with frameworks like Facebook’s React Native, Flutter began its life as an open source mobile UI framework that helps developers build native interfaces for Android and iOS. Since May, however, Flutter has let developers build desktop, embedded, mobile, and web apps from the same codebase. Developers can use Flutter on phones, wearables, tablets, desktops, laptops, televisions, and smart displays. Google calls this ambient computing — the idea that your services and software are available wherever you need them. Google wants developers to start app development not by asking “Which device am I targeting?” but “What am I going to build?” Reusing code should help startups limited by resources and let enterprises consolidate teams to ship a single experience.

Flutter numbers

This is exactly why developers in all sorts of environments, from individuals all the way up to team leads at major corporations, need to pay attention to Flutter updates. Google also broke down the share of Flutter developers: 35% work for a startup, 26% are enterprise developers, 19% are self-employed, and 7% work for design agencies. The company added that “Flutter usage is growing fast among enterprise customers in particular” and that large companies specifically appreciate the ability to build “highly branded experiences that support multiple platforms.” Google today pointed enterprises to SyncFusion Essential Studio and its high-quality Flutter components, including charting, PDF manipulation, and barcode generation.

Release process and versioning changes

Ahead of Flutter’s next stable release, Google is changing its release model in hopes of improving stability and predictability. While the current process served Flutter well when it was run by a smaller team, developers have lately complained about a lack of clarity around when releases would be built and what code they contained, as well as poor testing for branches, which caused regressions in hotfix releases.

Google is thus adopting a branching model with a stabilization period for beta and stable releases. The team will now branch at the beginning of the month for a beta release. Roughly once a quarter, the current beta branch will be promoted to stable. Google’s infrastructure now supports testing against branches, meaning it can validate cherry-picks of critical fixes and requests. The company hopes this will “provide higher confidence in the quality and predictability of our releases, and an easier way to deliver hotfixes to the stable channel.” The branching model also brings minor changes to the way releases are versioned, which you can read about on GitHub.

Google has also aligned the Flutter and Dart release processes and channels. (Flutter apps are built using Google’s Dart programming language.) Dart now has a beta channel, and future releases will be synced (for example, Flutter beta releases will contain a Dart beta release). Developers who already ship a Flutter app based on the stable channel should test it against beta candidate releases.

Google’s first Flutter release using this new versioning model will be its next stable release. It ships next week.

Granulate raises $12 million to optimize server performance with AI

Above: HP servers.

Image Credit: HP

Granulate, a startup developing a platform that optimizes computing infrastructure in real time, today announced that it raised $12 million, bringing its total raised to date to $15.6 million. The company’s products could reduce the time engineers spend fine-tuning the performance of enterprise systems, freeing them up to pursue more creative and impactful projects.

Granulate’s eponymous product comprises agents that can be installed on any Linux server in data centers or cloud environments, including virtual machines. These agents, which are underpinned by AI, adapt to both operating systems and kernels, prioritizing threads while taking into account each request’s processing stage and employing a network stack that enables high parallelism. Granulate analyzes memory usage patterns and sizes to optimize the allocation and release of memory for each app, and it autonomously crafts congestion control prioritizations between connections, optimizing throughput for the current workload and network conditions.

“Most companies run at 35% IT infrastructure utilization or less due to strict quality of service and stability needs. Granulate solves the trade-off between quality of service and costs, providing customers improved results in both,” said Granulate cofounder and CEO Asaf Ezra, who previously did a stint at cyber research and security firm KayHut after serving four years in the Israeli Defense Forces.

Applying AI to data center operations isn’t a new idea — IDC predicts that 50% of IT assets in data centers will run autonomously using embedded AI functionality by 2022. To this end, Concertio, which recently raised $4.2 million, provides AI-powered system optimization tools for boosting hardware and software performance. Facebook has said that it employs AI internally to speed up searches for optimal server settings. And IBM offers a tool — Data Center Advisor with Watson — that calculates data center health scores and predicts potential issues or failures.

Granulate

Above: Granulate’s analytics dashboard.

Image Credit: Granulate

But according to Ezra, Granulate’s suite — which works with existing monitoring tools like Prometheus, AppDynamics, New Relic, Datadog, Dynatrace, and Elastic — is installed in dozens of production environments across tens of thousands of servers, and it outperforms most rival offerings. The company claims it improves machine throughput by as much as five times, yielding up to 60% compute cost savings and a 40% reduction in latency.

Startapp, a mobile data platform with over 1.5 billion monthly active users and 800,000 partner apps, reports that Granulate achieved a 30% reduction in average latency and a 25% processor utilization reduction, netting a 33% compute cost reduction. Another customer — Bigabid, an advertising technology company specializing in mobile user acquisition and re-engagement for gaming, dating, and productivity apps — says it managed to reduce compute costs by 60% within 15 minutes of deploying Granulate.

“Given the current economic slowdown, we are even more excited about helping businesses across the globe achieve dramatic cost reductions necessary to thrive amid changes in the global business environment,” Ezra added.

Granulate’s financing round (a series A) was led by Insight Partners, with participation from TLV Partners and Hetz Ventures, and closed shortly after the company’s graduation from the 16-week Intel Ignite accelerator program. Ezra, who cofounded Granulate with Tal Saiag in 2018, says the capital will support the Tel Aviv-based company’s growth as it triples the size of its R&D, sales, and marketing departments. (Granulate currently has 14 employees and expects to reach roughly 40 by 2021.)

Flytrex launches drone delivery service in Grand Forks, North Dakota

Flytrex today announced the launch of a drone delivery service in Grand Forks, North Dakota. In collaboration with drones-as-a-service company EASE Drones, the Grand Forks Region Economic Development Corporation, the Northern Plains Unmanned Aerial System Test Site, and the City of Grand Forks, the startup will deliver food, medicine, and other goods from restaurant and retailer partners via drone to select households.

The expansion comes as the coronavirus pandemic motivates shelter-in-place orders around the world, forcing a number of businesses — particularly restaurants — to offer only delivery or pickup options. It’s been suggested that drones and autonomous vehicles can limit unnecessary contact between workers, couriers, and customers, helping to prevent the spread of infection.

Flytrex says that in Grand Forks, its drones will take off across the street from a local supercenter from which customers can purchase items. During the initial stages of the pilot program, deliveries will be offered to households that have opted into the service, and they’ll be made directly to backyards.

Flytrex’s latest deployment comes after the debut of the company’s on-demand drone delivery service in Reykjavik, Iceland. In 2018, Flytrex established its first partnership with EASE Drones, launching drone delivery at King’s Walk Golf Course in Grand Forks. Soon after, in 2019, Flytrex was selected by the Federal Aviation Administration (FAA) to participate in its UAS Integration Pilot Program (IPP) with the North Carolina Department of Transportation.

IPP, which began in 2017, aims to bring together state, local, and tribal governments and private sector entities, such as drone operators and manufacturers, to test and evaluate the integration of civil and public drone operations into the national airspace system. Other participants include the City of Reno, the University of Alaska-Fairbanks, the Memphis Shelby County Airport Authority, the Kansas Department of Transportation, and Herndon’s Innovation and Entrepreneurship Investment Authority.

Other drone startups are testing in North Carolina during the pandemic. Zipline, for one, will deliver personal protective equipment (such as masks) around the campuses of the Novant Health medical network in Charlotte, while Matternet, in partnership with UPS, will make medical deliveries between WakeMed hospital in Raleigh and its Healthplex in Garner. (Under the FAA’s Part 135 Standard certification, UPS’ Flight Forward subsidiary can fly drones beyond pilots’ lines of sight and make commercial deliveries.)

North Carolina Department of Transportation officials say they’ll use the data from the Zipline, Matternet, and Flytrex programs to learn how drone technology can be used in other areas of the country. Funding for the individual drone missions will come from private partners.

Elsewhere, Zipline’s drones are flying COVID-19 test samples from rural areas of Ghana to the nation’s capital. Alphabet’s Wing drone delivery business, which has deployments in Virginia, Finland, and Australia, continues to make deliveries to customers. Not to be outdone, DJI through its Disaster Recovery Program is conducting remote outreach to homeless populations in Tulsa and helping to maintain social distancing guidelines in Daytona Beach. And medical delivery drones supplied by Antwork and others were used in China to fly quarantine supplies and medical samples.

MIT aims for energy efficiency in AI model training

Image Credit: Novelo/Shutterstock

In a newly published paper, MIT researchers propose a system for training and running AI models in a way that’s more environmentally friendly than previous approaches. They claim it can cut the carbon emissions involved down to the “low triple digits” of pounds in some cases, mainly by improving the models’ computational efficiency.

Impressive feats have been achieved with AI across domains like image synthesis, protein modeling, and autonomous driving, but the technology’s sustainability issues remain largely unresolved. Last June, researchers at the University of Massachusetts at Amherst released a report estimating that the power required for training and searching a certain model entails emissions of roughly 626,000 pounds of carbon dioxide — equivalent to nearly 5 times the lifetime emissions of the average U.S. car.

The researchers’ solution, a “once-for-all” network, trains a large model comprising many pretrained sub-models of different sizes that can be tailored to a range of platforms without retraining. Each sub-model can operate independently at inference time without retraining, and the system identifies the best sub-model based on the accuracy and latency trade-offs that correlate to the target hardware’s power and speed limits. (For instance, for smartphones the system will select larger subnetworks, but with different structures depending on individual battery lifetimes and computation resources.)

A “progressive shrinking” algorithm efficiently trains the large model to support all of the sub-models simultaneously. The large model is trained first, and then smaller sub-models are trained with the help of the large model so that they learn concurrently. In the end, all of the sub-models are supported, allowing speedy specialization based on the target platform’s specifications.
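
In spirit, the deployment-time step reduces to picking the most accurate sub-network that fits a device’s constraints. Here’s a minimal, hypothetical Python sketch — the class, names, and numbers are invented for illustration and are not MIT’s actual API:

```python
from dataclasses import dataclass

@dataclass
class SubModel:
    name: str
    accuracy: float    # validation accuracy of this pretrained sub-network
    latency_ms: float  # measured inference latency on the target device

def select_submodel(candidates, latency_budget_ms):
    """Return the most accurate sub-network that meets the latency budget."""
    feasible = [m for m in candidates if m.latency_ms <= latency_budget_ms]
    if not feasible:
        raise ValueError("no sub-network satisfies the latency budget")
    return max(feasible, key=lambda m: m.accuracy)

# Hypothetical sub-networks carved out of one once-for-all supernetwork.
zoo = [
    SubModel("tiny", accuracy=0.71, latency_ms=12.0),
    SubModel("small", accuracy=0.75, latency_ms=25.0),
    SubModel("medium", accuracy=0.78, latency_ms=48.0),
    SubModel("large", accuracy=0.80, latency_ms=95.0),
]

print(select_submodel(zoo, latency_budget_ms=50.0).name)  # -> "medium"
```

Because every sub-network is already trained, this lookup replaces a full training run per device — which is where the claimed energy savings come from.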

In experiments, the researchers found that training a computer vision model containing over 10 quintillion architectural settings with their approach ended up being far more efficient than spending hours training each sub-network. Furthermore, it didn’t compromise the model’s accuracy or efficiency — the model achieved state-of-the-art accuracy on mobile devices when tested against a common benchmark (ImageNet) and was 1.5 to 2.6 times faster in terms of inference than leading classification systems.

Perhaps more impressive, the researchers claim that the computer vision model required roughly 1/1,300 the carbon emissions while training compared with today’s popular model search techniques. “If rapid progress in AI is to continue, we need to reduce its environmental impact,” said IBM fellow and member of the MIT-IBM Watson AI Lab John Cohn, referring to the study. “The upside of developing methods to make AI models smaller and more efficient is that the models may also perform better.”

Available on GitHub are the code and pretrained models for devices like the Samsung Galaxy Note8, Samsung Galaxy Note10, Samsung Galaxy S7 Edge, LG G8, Google Pixel, and Pixel 2, as well as for Intel Xeon processors and Nvidia hardware including the GTX 1080Ti, Jetson TX2, and V100.

It’s worth noting that MIT’s work builds on approaches like that outlined in a 2017 paper titled “Efficient Processing of Deep Neural Networks: A Tutorial and Survey.” The research laid out some of the ways to reduce the computational demands of machine learning models, including changes to hardware design, hardware-algorithm co-design, and the algorithms themselves. Other proposals have called for an industry-level energy analysis and a compute-per-watt standard for machine learning projects.

DeepMind's AI studies game players to exploit weaknesses in their strategies

DeepMind
Image Credit: DeepMind

In a paper published on the preprint server Arxiv.org, scientists at Alphabet’s DeepMind propose a new framework that learns an approximate best response to players within games of many kinds. They claim that it achieves consistently high performance against “worst-case opponents” — that is, players who aren’t good, yet at least play by the rules and actually complete the game — in a number of games including chess, Go, and Texas Hold’em.

DeepMind CEO Demis Hassabis often asserts that games are a convenient proving ground to develop algorithms that can be translated into the real world to work on challenging problems. Innovations like this new framework, then, could lay the groundwork for artificial general intelligence (AGI), which is the holy grail of AI — a decision-making AI system that automatically completes not only mundane, repetitive enterprise tasks like data entry, but which reasons about its environment. That’s the long-term goal of other research institutions, like OpenAI.

The level of performance against such players is known as exploitability. Computing exploitability is often computationally intensive because the number of actions players might take is so large. For example, one variant of Texas Hold’em — Heads-Up Limit Texas Hold’em — has roughly 10^14 decision points, while Go has approximately 10^170. One way around this is to train a policy that exploits the player being evaluated, using reinforcement learning — an AI training technique that spurs software agents to complete goals via a system of rewards — to compute an approximate best response.
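
For readers who want the formal version: in the game theory literature (this is the standard definition, not necessarily the paper’s exact notation), exploitability measures the average gain each player could obtain by deviating to a best response:

```latex
% Standard definition (NashConv averaged over n players); u_i is player i's
% expected utility and \sigma_{-i} denotes the other players' strategies.
\mathrm{expl}(\sigma) = \frac{1}{n} \sum_{i=1}^{n}
  \Bigl( \max_{\sigma_i'} u_i(\sigma_i', \sigma_{-i}) - u_i(\sigma) \Bigr)
```

A strategy profile with zero exploitability is a Nash equilibrium; the larger the value, the more an informed opponent stands to win.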

The framework the DeepMind researchers propose, which they call Approximate Best Response Information State Monte Carlo Tree Search (ABR IS-MCTS), approximates an exact best response on an information-state basis. Actors within the framework follow an algorithm to play a game while a learner derives information from various game outcomes to train a policy. Intuitively, ABR IS-MCTS tries to learn a strategy that, when the exploiter is given unlimited access to the strategy of the opponent, can create a valid and exploiting counterstrategy; it simulates what would happen if someone trained for years to exploit the opponent.

The researchers report that in experiments involving 200 actors (trained on a PC with 4 processors and 8GB of RAM) and a learner (10 processors and 20GB of RAM), ABR IS-MCTS achieved a win rate above 50% in every game it played and a rate above 70% in games other than Hex or Go (like Connect Four and Breakthrough). In backgammon, it won 80% of the time after training for 1 million episodes.

DeepMind (ABR) IS-MCTS

The coauthors say they see evidence of “substantial learning” in that when the actors’ learning steps are restricted, they tend to perform worse even after 100,000 episodes of training. They also note, however, that ABR IS-MCTS is quite slow in certain contexts, taking on average 150 seconds to calculate the exploitability of a particular kind of strategy (UniformRandom) in Kuhn poker, a simplified form of two-player poker.

Future work will involve extending the method to even more complex games.

Robots, AI, and the road to a fully autonomous construction industry

Built Robotics excavator and dozer on a construction site

Above: Built Robotics excavator and dozer on a construction site

Image Credit: Built Robotics

Built Robotics executives are fond of saying that their autonomous system for construction equipment, like dozers and excavators, might be further along than many autonomous vehicles. In fact, CEO Noah Ready-Campbell insists you’ll see autonomous vehicles in controlled industrial environments — like construction sites — before you see level 5 driverless cars on public roads. That may be in part because autonomous construction equipment often operates on privately owned land, while public roads face increased regulatory scrutiny.

“There’s a quote that ‘Cold fusion is 20 years in the future and always will be,'” Ready-Campbell told VentureBeat. “I think there’s a chance that that might be true for level 5 self-driving cars as well.”

That might have seemed like an absurd thing to say back when autonomous driving first entered the collective imagination and companies declared their intention to solve AI’s grand autonomous vehicle challenge. But Waymo now takes billions from outside investors, and delays to major initiatives like GM’s Cruise robotaxi service and Ford’s autonomous driving program call into question the progress automakers have made on autonomous vehicles.

One thing Ready-Campbell credits autonomous vehicle companies with is generating excitement around AI for use in environments beyond public roads, like on construction sites.

“We were the beneficiaries of that when we did our series B last year,” he said. “I definitely think construction benefited from that.”

From computer vision systems and drones to robots walking and roving through construction projects, Built Robotics and a smattering of other companies are working in unstructured industrial environments like mining, agriculture, and construction to make autonomous systems that can build, manage, and predict outcomes.

To take a closer look at innovation in the field, the challenges ahead, and what it’s going to take to create fully autonomous construction projects in the future, VentureBeat spoke with startups that are already automating parts of their construction work.

Autonomous excavators and heavy machinery

Built Robotics creates control systems for existing construction equipment and is heavily focused on digging, moving, and placing dirt. The company doesn’t make its own heavy construction equipment; its solution is instead a box of tech mounted inside heavy equipment made by companies like Caterpillar, Komatsu, and Hyundai.

Built Robotics VP of strategy Gaurav Kikani told VentureBeat that the company started with autonomous skid steers — the little dozers that scoop up and transport sand or gravel on construction sites. Today, Built Robotics has autonomous systems for bulldozers and 40-ton excavators.

“We have a software platform that actuates the equipment that takes all the data … being read by the sensors on the machine every second and then makes decisions and actuates the equipment accordingly,” Kikani said.

Built Robotics focuses on earthmoving projects at remote job sites in California, Montana, Colorado, and Missouri — far removed from human construction workers. Autonomous heavy equipment monitored by a human overseer tills the earth in preparation for later stages of construction, when human crews arrive to do things like build homes or begin wind or solar energy projects. In the future, the startup, which raised $33 million last fall, wants to help with more infrastructure projects.

Kikani and Built Robotics CEO Ready-Campbell say the company is currently focused on projects where there’s a lot of dirt to move but not a lot of qualified operators of heavy machinery.

Calling to mind John Henry versus the machine, Kikani said human operators can go faster than a Built-controlled excavator, for example, but machine automation is meant to provide consistency and maintain a reliable pace to ensure projects finish on schedule.

Built Robotics combines lidar with cameras for perception and to recognize humans or potential obstacles. Geofencing keeps machinery from straying outside the footprint of a construction site. Excavators and dozers can work together, with dozers pushing material away or creating space for the excavator to be more productive.
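
At its simplest, geofencing of this kind reduces to a point-in-polygon test against the site boundary. Below is a minimal Python sketch using the standard ray-casting algorithm; the coordinates and the halt logic are invented for illustration, and Built Robotics’ actual safety system is certainly far more involved:

```python
def inside_geofence(point, polygon):
    """Ray-casting point-in-polygon test: True if (x, y) lies inside the
    site boundary given as a list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

site = [(0, 0), (100, 0), (100, 60), (0, 60)]  # site boundary in meters
assert inside_geofence((50, 30), site)          # excavator safely inside
assert not inside_geofence((120, 30), site)     # outside: halt the machine
```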

“The fleet coordination element here is going to be critical. In Built [Robotic]’s early days, we really focused on standalone activities, where you have one piece of equipment just on its own taking care of the scope. But realistically, to get into the heart of construction, I think we’re going to start to coordinate with other types of equipment,” Kikani said. “So you might have excavators loading trucks [and] autonomous haulage routes where you have fleets of trucks that are all kind of tracking along the same route talking to each other, alerting each other to what they see along the route if conditions are changing.”

“I think the trickiest thing about construction is how dynamic the environment is, building technology that is pliable or versatile enough to account for those changing conditions and being able to update in real time to plan to accommodate for that. I think that is really going to be the key here,” he said.

Equipment operated by systems from companies like Built Robotics will also need computer vision to recognize utility lines, human remains, or anomalies like archeological or historically important artifacts. It’s not an everyday occurrence, but construction activity in any locale can unearth artifacts that lead to work stoppage.

Drones, robots, and computer vision

Drones that can deploy automatically from a box are being developed for a variety of applications, from fire safety to security to power line inspection. Drones hovering above a construction site can track project progress and could eventually play a role in orchestrating the movement of people, robotic equipment, and heavy machinery.

In a nod to natural systems, San Francisco-based Sunflower Labs calls its drones “bees,” its motion and vibration sensors “sunflowers,” and the box its drones emerge from a “hive.”

Sensors around a protected property detect motion or vibrations and trigger the drones to leave their base station and record photos and video. Computer vision systems working with sensors on the ground guide the drone to look for intruders or investigate other activity. The drones are fitted with sensors on all four sides to guide autonomous flight.

Sunflower Labs CEO Alex Pachikov said his company’s initial focus is on the sale of drones-in-a-box for automated security at expensive private homes. The company is also seeing a growing interest from farmers of high-value crops, like marijuana.

Multiple Sunflower Labs drones can also coordinate to provide security for a collection of vacation homes, acting as a kind of automated neighborhood watch that responds to disturbances during the months of the year when the homes attract few visitors.

Stanley Black and Decker, one of the largest security equipment providers in the United States, became a strategic investor in Sunflower Labs in 2017 and then started exploring how drones can support construction project security and computer vision services. Pachikov said Sunflower’s security is not intended to replace all other forms of security, but to add another layer.

The company’s system of bees, hives, and sunflowers is an easy fit for construction sites, where theft and trespassing at odd hours can be an issue, but the tools can do a lot more than safeguard vacant sites.

When a Sunflower Labs drone buzzes above a construction site, it can deploy computer vision-enabled analytics tools for volumetric measurement to convert an image of a pile of gravel into a prediction of total on-site material.
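
Volumetric measurement of this sort typically integrates a photogrammetry-derived surface over the base terrain. Here’s a minimal Python sketch with invented grids and function names — not any vendor’s actual API:

```python
import numpy as np

def stockpile_volume(surface_m, base_m, cell_area_m2):
    """Estimate stockpile volume by integrating the height of the
    photogrammetry surface above the base terrain over every grid cell."""
    heights = np.clip(surface_m - base_m, 0.0, None)  # ignore below-grade cells
    return float(heights.sum() * cell_area_m2)

# Hypothetical 0.5 m x 0.5 m elevation grids from drone photogrammetry.
base = np.zeros((200, 200))       # flat pad
surface = np.zeros((200, 200))
surface[80:120, 80:120] = 3.0     # a 40 x 40-cell pile, 3 m high
print(stockpile_volume(surface, base, cell_area_m2=0.25))  # -> 1200.0 m^3
```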

Then tools from computer vision startups like Pix4D, Stockpile Reports, and DroneDeploy can provide object detection, 3D renderings of properties for tracking construction progress, and other image analysis capabilities.

Companies like Delair take a combination of data from IoT sensors, drone footage, and stationary cameras from a construction project to create a 3D rendering that Delair calls a digital twin. The rendering is then used to track progress and identify anomalies like cracks or structural issues.

Major construction companies around the world are increasingly turning to technology to reduce construction project delays and accident costs. The 2019 KPMG global construction survey found that within the next five years, 60% of executives at major construction companies plan to use real-time models to predict risks and returns.

Indus.ai is one of a handful of companies making computer vision systems for tracking progress on construction sites.

“We can observe and use a segmentation algorithm to basically know every pixel — what material it is — and therefore we know the pace of your concrete work, your rebar work, your form work and [can] start predicting what’s happening,” Indus.ai CEO Matt Man told VentureBeat in a phone interview.

He envisions robotic arms being used on construction sites to accomplish a range of tasks, like creating materials or assembling prefabricated parts. Digitization of data with sensors in construction environments will enable various machine learning applications, including robotics and the management of environments with a mix of working humans and machines. 

For large projects, cameras can track the flow of trucks entering a site, the number of floors completed, and the overall pace of progress. Computer vision could also follow daily work product and help supervisors determine whether the work of individuals and teams follows procedure or best trade practices.

“Imagine a particular robotic arm can start putting drywall up, then start putting tiles up, all with one single robotic arm. And that’s where I see the future of robotics […] To be able to consolidate various trades together to simplify the process,” Man said. “There could be armies of robot-building things, but then there is an intelligent worker or supervisor who can manage five or 10 robotic arms at the same time.”

Man thinks software for directing on-site activity will become more critical as contractors embrace robotics, and he sees a huge opportunity for computer vision to advance productivity and safety in industrial spaces.

Stanford University engineers have explored the use of drones for construction site management, but such systems do not appear to be widely available today or capable of coordinating human and robotic activity.

“Having all these kinds of logistical things run together really well, it’s something I think AI can do. But it’s definitely going to take some time for the whole orchestration to be done well, for the right materials to get to the right place at the right time for the robot to pick it up and then to do the work or react if some of the material gets damaged,” Man said. “In the current construction methodology, it’s all about managing surprises, and there are millions of them happening over the course of the whole construction plan, so being able to effectively manage those exceptions is going to be a challenge.”

One construction robotics platform to rule them all

Boston Dynamics, known for years as the maker of cutting-edge robots, also entered construction sites last year as part of its transition from an R&D outfit to a commercial company.

Like Sunflower Labs’ drones, Boston Dynamics’ four-legged Spot with a robotic grasping arm acts as a sensor platform for 360-video surveys of construction projects. Capable of climbing stairs, opening doors, and regaining its balance, the robot can also be equipped with other sensors to track progress and perform services that rely on computer vision.

An event held by TechCrunch at the University of California, Berkeley last month was one of the first opportunities Bay Area roboticists have had to convene since the pandemic began pushing the economy toward recession. Investors focused on robotics for industrial or agricultural settings urged startups to raise money now if they could, to be careful about costs, and to continue progress toward demonstrating product-market fit.

On a panel that included Built Robotics CEO Ready-Campbell, startup executives debated whether there will be a dominant platform for construction robotics. Contrary to others on the panel, Boston Dynamics construction technologist Brian Ringley said he believes platforms will emerge to coordinate multiple machines on construction sites.

“I think long-term there will be enough people in the markets that there will be more competition, but ultimately it’s the same way we use lots of different people and lots of machines on sites now to do these things. I do believe there will be multiple morphologies on construction sites and it will be necessary to work together,” Ringley said.

Tessa Lau is cofounder and CEO of Dusty Robotics, a company that makes an automated building layout robot called FieldPrinter. She said there’s a huge opportunity for automation and human labor augmentation in an industry that currently has very little of either. Systems may emerge that are capable of doing the work of multiple trades or of managing on-site activity, but Lau noted there can be nearly 80 different building trades involved in a construction site. Another problem: construction sites, unlike factories, have no set or static state — they are by definition in fairly constant change.

“I think the flip side is if you look at a typical construction site, it’s chaos, and anyone with a robotics background who knows anything about robotics knows it’s really hard to make robots work in that kind of unstructured environment,” she said.

Forget the word “robot”

One thing the TechCrunch panelists agreed on is that robots on construction sites won’t succeed unless the people working alongside them want them to. To help ensure that happens, Lau suggested startups slap googly eyes on their robots because people want to see things that are cute or beloved succeed.

“Our customers are rightfully concerned that robots are going to take their jobs, and so we have to be careful about whether we are building a robot or … building a tool,” Lau said. “And, in fact, we call our product a FieldPrinter. It’s an appliance like a printer. It uses a lot of robotic technology — it uses sensors and path planning and AI and all the stuff that powers robotics today, but the branding and marketing is really around the functionality. Nobody wants to buy a robot; they want to solve a problem.”

Built Robotics CEO Ready-Campbell wholeheartedly agreed, arguing that even a thermostat can be considered a robot if the only requirement to meet that definition is that it’s a machine capable of manipulating its environment.

Last month, just before economic activity began to slow and shelter-in-place orders took effect, the International Union of Operating Engineers, which has over 400,000 members, established a multi-year training partnership with Built Robotics. Executives from Built Robotics say its systems operate primarily in rural areas that experience skilled labor shortages, but Ready-Campbell thinks it’s still a good idea to drop the term “robot” because it scares people. Opposition to construction robotics could also become an issue in areas that see high levels of unemployment.

“That’s how we position Built [Robotics] in the industry, because when people think of robots, it kind of triggers a bunch of scary thoughts. Some people think about The Terminator, some people think about losing jobs,” he said. “It’s an industry that really depends on using advanced machinery and advanced technology, and so we think that automation is just the next step in the automation of that industry.”

Randori raises $20 million to spot cyberattacks with AI

Randori
Image Credit: Randori

Cybersecurity startup Randori today announced that it secured $20 million in equity financing, bringing the startup’s total raised to $29.75 million. The infusion of capital comes after a year during which attacks on internet of things devices tripled, and during which the number of malicious payloads on the web hit 24.6 billion — up 14% from 2018.

Randori’s attack platform promises to safely launch attacks on organizations to help them understand how to prevent or mitigate the effects of data breaches and other compromises, in part by leveraging machine learning to assess the exploitability of vulnerabilities. It’s a novel approach in that it needs only a corporate email address to scan for threats, and thus ostensibly requires less setup and configuration than rival offerings.

Randori — whose name was inspired by Japanese martial arts, and whose customers include Carbon Black, Greenhill, RapidDeploy, and the Center for Strategic and International Studies — provides a suite that aims to automate the assessment and decision-making underpinning when, where, and how an attacker is most likely to strike. To this end, it provides context and information about findings, remediation steps, and exploitable systems to prioritize.

Randori’s Recon product enables teams to continuously scan for misconfigurations, blind spots, and process failures using a black-box approach. Starting with an email address, Recon automatically creates a baseline of an organization’s attack surface. An integrated model — the Target Temptation model — then spots the assets most likely to elicit action from an attacker, taking into account factors like known weaknesses, post-exploitation potential, and the cost of action by an attacker.
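
Randori hasn’t published how Target Temptation weighs those factors, but a toy version of that kind of prioritization might look like the following Python sketch — the weights, factor names, and 0-to-1 scales are all invented for illustration:

```python
# Hypothetical weighted "temptation" score over the factors named above;
# nothing here reflects Randori's actual model.
WEIGHTS = {
    "known_weaknesses": 0.5,        # e.g., public CVEs on the exposed service
    "post_exploit_potential": 0.3,  # how useful a foothold here would be
    "attacker_cost": -0.2,          # required effort works against temptation
}

def temptation(asset):
    return sum(WEIGHTS[k] * asset[k] for k in WEIGHTS)

assets = [
    {"name": "vpn-gw", "known_weaknesses": 0.9,
     "post_exploit_potential": 0.8, "attacker_cost": 0.3},
    {"name": "blog", "known_weaknesses": 0.4,
     "post_exploit_potential": 0.1, "attacker_cost": 0.2},
]
for a in sorted(assets, key=temptation, reverse=True):
    print(a["name"], round(temptation(a), 2))  # vpn-gw ranks first: fix it first
```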

Randori

As for Randori’s Attack, which is designed to pair with and complement Recon, it tests defenses against attacks by mirroring adversaries, exposing gaps and critical problems in the process. It attempts to gain access to valuable data and assets, taking pains at each step to elucidate successful and unsuccessful actions mapped to MITRE ATT&CK, a freely accessible knowledge base of tactics based on real-world observations. At the conclusion of each attack, it reports metrics including the time taken to detect, contain, or expel the attack; the percentage of attacks detected; and the sophistication required to reach the assets.

Randori tells VentureBeat that it uses AI classification models to prioritize the targets attackers are most interested in, as well as affiliation mechanisms that make up a confidence engine responsible for analyzing information gleaned from internet scans. The confidence engine generates a relative score of how likely an entity — whether an IP address, hostname, domain, certificate, network, or other entity — on the internet is to be associated with a provided domain. This information helps Randori to identify where one company’s assets end and another’s begin.
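
Randori hasn’t disclosed the mechanics of the confidence engine either. One plausible, purely illustrative way to combine independent association signals into a single score is a naive log-odds combination, sketched here in Python — the signal list and probabilities are invented:

```python
import math

def association_confidence(signals):
    """signals: list of P(signal | entity belongs to the domain), each in (0, 1).
    Combines the log-odds of each observed signal against a 0.5 prior."""
    log_odds = sum(math.log(p / (1 - p)) for p in signals)
    return 1 / (1 + math.exp(-log_odds))

# A host that shares a TLS certificate (0.9), a WHOIS contact (0.8), and a
# similar hostname (0.7) with the customer's domain:
print(round(association_confidence([0.9, 0.8, 0.7]), 3))  # -> 0.988
```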

Randori — which has 21 employees and expects to have over 100 by 2022 — was cofounded in 2018 by CEO and former Carbon Black exec Brian Hazzard and CTO David Wolpoff, along with Evan Anderson, Eric McIntyre, and Ian Lee. Hazzard says the funding from this latest round (a series A), which was led by Harmony Venture Partners with participation from existing investors Accomplice, .406 Ventures, and Legion Capital, will be used to build out a team of red-team hackers and to develop attack techniques to integrate with the platform.

“Security teams are looking for ways to be more proactive. They want to anticipate, not just react, to threats. This requires understanding what’s possible from the attacker’s perspective and where your security program is likely to break down,” Hazzard said, adding that Randori has over 150 active users. “Our platform exposes how attackers think, act and conduct campaigns, bringing a continuous red team experience to the mass market. This funding accelerates that by enabling us to double our headcount over the next year.”

Randori isn’t without competitors in the cyberthreat detection and remediation space. Ironscales employs AI and machine learning to defeat organization-wide phishing attacks in real time, and France- and Boston-based Vade recently raised $79 million to further develop its filtering stack that protects against compromise, malware, and spam. There’s also Tessian, which uses machine learning to secure enterprise mail, and Valimail, which nabbed $45 million last year to thwart email phishing attacks. And ZeroFox taps AI to surface threats of violence and to identify deepfakes — videos that use AI to replace a person in an existing image, audio recording, or video with someone else’s likeness.

Boston Dynamics open-sources health care robotics toolkit for telemedicine, vitals inspection, and disinfection

Boston Dynamics' Spot robot deployed for telemedicine

Above: Boston Dynamics' Spot robot deployed for telemedicine

As a direct response to the coronavirus pandemic, Boston Dynamics today open-sourced its health care robotics toolkit on GitHub. The company hopes that existing Boston Dynamics customers and other mobile robot providers can use the toolkit, which includes documentation and CAD files of enclosures and mounts, to help health care workers and essential personnel and ultimately save lives. The mobile robot provider outlined four use cases for its toolkit: telemedicine (which it has already deployed), remote vitals inspection, disinfection, and delivery.

Boston Dynamics says that in early March hospitals started inquiring whether its robots could help minimize staff exposure to the novel coronavirus. (One hospital apparently shared that in a single week a sixth of its staff had contracted COVID-19.) The company spent weeks figuring out how its robot Spot, which is shipping to early adopters, can meet hospital requirements. The result is a four-legged robot that supports frontline staff responding to the pandemic “in ad-hoc environments, such as triage tents and parking lots.” In fact, a single Spot was deployed last week to Brigham and Women’s Hospital in Boston as a mobile telemedicine platform to help health care providers remotely triage patients. There, it has helped nursing staff minimize exposure to potentially contagious patients.

The world is currently experiencing a global shortage of critical personal protective equipment (PPE), opening the door to autonomous technologies like drones and robots. Essential services are desperate for technology that can limit human contact, moving personnel and visitors out of infection range. As other businesses reopen, and arguably long after the pandemic is over, company leaders will be hungry for the same.

Telemedicine

The telemedicine part was the lowest-hanging fruit, so that’s what Boston Dynamics pursued first at Brigham and Women’s Hospital. The Spot robot features an iPad and a two-way radio for video conferencing. Health care providers remotely direct the mobile robot through lines of patients waiting outside the hospital to answer questions and get initial temperature assessments. Doctors can speak with patients from afar, possibly even from their own homes.

Boston Dynamics' Spot robot telemedicine

This process normally requires up to five medical staff, Boston Dynamics says. A mobile robot lets hospitals reduce the total number at the scene and conserve the hospital’s PPE supply. Every Spot shift reduces at least one health care provider’s exposure to the disease.

Vitals inspection, disinfection, and delivery

Boston Dynamics has also prototyped using Spot for remote vitals inspection to triage sick patients, for disinfection, and for various deliveries. For remote vital inspection, the company still needs to figure out how to support collecting additional vital sign information, including remotely measuring body temperature, respiratory rate, pulse rate, and oxygen saturation. So far, Boston Dynamics has done the following:

We have been in dialogue with researchers who use thermal camera technology to measure body temperature and calculate respiratory rate. We’ve also applied externally developed logic to externally mounted RGB cameras to capture changes in blood vessel contraction to measure pulse rate. We are evaluating methods for measuring oxygen saturation.

Additionally, Boston Dynamics wants the robots to disinfect hospital rooms and themselves. The company has also made some progress here:

By attaching a UV-C light to the robot’s back, Spot could use the device to kill virus particles and disinfect surfaces in any unstructured space that needs support in decontamination — be it hospital tents or metro stations. We are still in the early stages of developing this solution but also see a number of existing mobile robotics providers who have implemented this technology specifically for hospitals.

We’ve left the most obvious use case for last. The robots can deliver food, medicine, masks, and other supplies to patients in isolation. To help, the company prototyped a 3D-printable tray for Spot. Again, this minimizes health worker exposure and PPE usage.

None of these services requires Boston Dynamics’ hardware or software, the company emphasized. “In many instances, we imagine wheeled or tracked robots may be a better solution for these applications,” the company said. That’s why it’s releasing its toolkit to the world.

Paige raises $5 million more from Goldman Sachs to detect cancer with computer vision

Paige has raised $5 million to use computer vision for cancer detection.

Above: Paige has raised $5 million to use computer vision for cancer detection.

Image Credit: Paige

Health care startup Paige has raised an additional $5 million to help diagnose cancer using computer vision trained with clinical imaging data. The idea is to use data sets related to treatment and genomics to train the company’s deep learning networks to detect breast, prostate, and other major cancers.

New York-based Paige has raised over $75 million to date. The money came from Goldman Sachs Merchant Banking Division, and it means the company has now raised $50 million for its series B round, which was originally disclosed in December.

Paige will use the new capital to develop diagnostic and test products for the biopharma industry while strengthening its position in clinical AI for pathologists. It is also developing the Paige platform for remote viewing and routine clinical practice.

The company said it has added David Castelblanco, managing director at Goldman Sachs, to its board. Paige CEO Leo Grady said he was thrilled to be working with Castelblanco and Goldman Sachs as the company further develops its computational pathology infrastructure.

The company also has a partnership with Invicro, a Konica Minolta company, to provide integrated pathology solutions to support pharmaceutical and biotechnology sponsors with their drug discovery and development initiatives.

StereoSet measures racism, sexism, and other forms of bias in AI language models

Image Credit: raindrop74 / Shutterstock

AI researchers from MIT, Intel, and Canadian AI initiative CIFAR have found high levels of stereotypical bias from some of the most popular pretrained models like Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. The analysis was performed as part of the launch of StereoSet, a data set, challenge, leaderboard, and set of metrics for evaluating racism, sexism, and stereotypes related to religion and profession in pretrained language models.

The authors believe their work is the first large-scale study to show stereotypes in pretrained language models beyond gender bias. BERT is generally known as one of the top-performing language models in recent years, while GPT-2, RoBERTa, and XLNet each claimed top spots on the GLUE leaderboard last year. Half of the GLUE leaderboard’s top 10 today, including RoBERTa, are variations of BERT.

The team evaluated pretrained language models based on both language modeling ability and stereotypical bias. A small version of OpenAI’s GPT-2 tops the StereoSet leaderboard in early testing. Several examples of how each model performs in each area of bias can be found on the StereoSet website.

Above: Examples of StereoSet intersentence pairings for bias analysis from an ensemble of language models evaluated in the work

Image Credit: StereoSet

The StereoSet data set comes with about 17,000 test instances for carrying out Context Association Tests (CAT) that measure a language model’s ability and bias. An idealized CAT score, or ICAT, combines language model performance and stereotype scores.
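
As defined in the StereoSet paper, ICAT multiplies the language modeling score (lms, ideal 100) by how close the stereotype score (ss, ideal 50, i.e., no preference between stereotype and anti-stereotype) is to neutral. A quick Python rendering of that formula:

```python
def icat(lms, ss):
    """Idealized CAT score: lms is the language modeling score (ideal 100)
    and ss the stereotype score — how often the model prefers the
    stereotypical association (ideal 50)."""
    return lms * min(ss, 100 - ss) / 50

print(icat(lms=100, ss=50))   # ideal model -> 100
print(icat(lms=100, ss=100))  # fully stereotyped -> 0
print(icat(lms=85, ss=60))    # a capable but biased model -> 68
```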

“We show that current pretrained language models exhibit strong stereotypical biases, and that the best model is 27 ICAT points behind the idealistic language model,” the paper reads. “We find that the GPT-2 family of models exhibit relatively more idealistic behavior than other pretrained models like BERT, RoBERTa, and XLNet.”

Above: Examples of StereoSet intrasentence pairings for bias analysis from an ensemble of language models considered in the work

Image Credit: StereoSet

Contributors to the work published on preprint repository arXiv include MIT student Moin Nadeem, Intel AI for Social Good lead Anna Bethke, and McGill University associate professor and CIFAR Facebook AI chair Siva Reddy. Pretrained models are known for capturing stereotypical bias because they’re trained on large data sets of real-world data.

Researchers believe GPT-2 may strike a better balance than other models because it’s built using data from Reddit. Another challenge introduced last week for training language models to give better advice to humans also relies on subreddit communities.

“Since Reddit has several subreddits related to target terms in StereoSet (e.g., relationships, religion), GPT2 is likely to be exposed to correct contextual associations,” the paper reads. “Also, since Reddit is moderated in these niche subreddits (i.e., /r/feminism), it could be the case that both stereotypical and anti-stereotypical associations are learned.”

Researchers were surprised to find no correlation between the size of the data set used to train a model and its idealized CAT score.

“As the language model becomes stronger, so does its stereotypical bias (SS). This is unfortunate and perhaps unavoidable as long as we rely on the real-world distribution of corpora to train language models, since these corpora are likely to reflect stereotypes (unless carefully selected),” the paper reads. “This could be due to the difference in architectures and the type of corpora these models are trained on.”

To examine bias, StereoSet runs models through sentence-specific or intrasentence fill-in-the-blank tests, as well as dialog or intersentence tests. In both instances, models are asked to choose between three associative context words related to a subject.

The StereoSet data set of associated terms and stereotypes was assembled by paid Mechanical Turk workers in the United States. Researchers say this approach may come with limitations because the majority of Mechanical Turk workers are under the age of 50.

In the past, a number of NLP researchers have used analysis methods like word embeddings, which can reveal a model’s preference, for example, that a doctor is a man and a nurse is a woman. The CAT test was inspired by previous work in bias evaluation, particularly the word embedding association test (WEAT).

In similar but unrelated work, last week a group of researchers from 30 organizations including Google and OpenAI recommended AI companies do bias bounties and create a third-party bias and safety audit market, among ways to turn AI ethics principles into practice.

The news follows a study last month that found major automated speech detection systems were more likely to recognize white voices than black voices. Last year, researchers found OpenAI’s GPT-2 generates different responses when prompted with race-related language.

Google debuts AI in Google Translate that addresses gender bias

Image Credit: Reuters

Google today announced the release of English-to-Spanish and Finnish-, Hungarian-, and Persian-to-English gender-specific translations in Google Translate that leverage a new paradigm to address gender bias by rewriting or post-editing initial translations. The tech giant claims the approach is more scalable than an earlier technique underpinning Google Translate’s gender-specific Turkish-to-English translations, chiefly because it doesn’t rely on a data-intensive gender-neutrality detector.

“We’ve made significant progress since our initial launch by increasing the quality of gender-specific translations and also expanding it to 4 more language-pairs,” Google Research senior software engineer Melvin Johnson wrote. “We are committed to further addressing gender bias in Google Translate and plan to extend this work to document-level translation, as well.”

As Johnson explains, the old classifier used for Turkish-to-English gender-specific translations — which was laborious to adapt to new languages — failed to produce masculine and feminine translations independently using a neural machine translation (NMT) system. Moreover, it couldn’t show gender-specific translations for up to 40% of eligible queries because the two translations often weren’t exactly equivalent except for gender-related phenomena.

Google Translate bias

By contrast, the new rewriting-based method first generates translations and then reviews them to identify instances where a gender-neutral source phrase yielded a gender-specific translation. If that turns out to be the case, a sentence-level rewriter spits out an alternative gendered translation, and both the first and rewritten translations are reviewed to ensure gender is the only difference.

According to Google, building the rewriter involved generating millions of training examples composed of pairs of phrases, each of which included both masculine and feminine translations. Because the data wasn’t readily available, the Google Translate team had to come up with candidate rewrites by swapping gendered pronouns from masculine to feminine (or the other way around), starting with a large monolingual data set. To this corpus of rewrites, engineers applied an in-house language model trained on millions of English sentences to select the best candidates, which netted training data that went from a masculine input to a feminine output and vice versa.
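
The pronoun-swapping step can be pictured with a toy Python function. This is not Google’s code — and a production system must handle ambiguity (English “her” maps to either “his” or “him,” which is part of why Google scores candidates with a language model):

```python
# Illustrative-only candidate generation: swap gendered English pronouns.
# Real pipelines need context to resolve ambiguous forms like "her".
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "him": "her", "himself": "herself", "herself": "himself"}

def swap_gender(sentence):
    out = []
    for w in sentence.split():
        bare = w.strip(".,").lower()
        if bare in SWAPS:
            swapped = SWAPS[bare]
            if w[0].isupper():
                swapped = swapped.capitalize()
            out.append(w.replace(w.strip(".,"), swapped, 1))
        else:
            out.append(w)
    return " ".join(out)

print(swap_gender("He is a doctor and she admires his work."))
# -> "She is a doctor and he admires her work."
```

A language model then filters the resulting candidates, keeping only rewrites it judges fluent — which is how the noisy swaps become usable training pairs.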

Google Translate bias

After merging the training data from both directions, the team used it to train a one-layer Transformer-based sequence-to-sequence model. They then introduced punctuation and casing variants into the training data to increase the model’s robustness, such that the final model can reliably produce the requested masculine or feminine rewrites 99% of the time.

Evaluated on a Google-developed metric called bias reduction, which measures the relative reduction of bias between the new translation system and the existing system (where “bias” is defined as making a gender choice in translation that’s unspecified in the source), Johnson says the new approach results in a bias reduction of ≥90% for translations from Hungarian, Finnish, and Persian to English. The bias reduction of the existing Turkish-to-English system improved from 60% to 95%, and the system triggers gender-specific translations with an average precision of 97% — i.e., when it decides to show gender-specific translations, it’s right 97% of the time.

The improved Google Translate system’s rollout comes months after Google removed the ability to label people in images as “man” or “woman” with its Cloud Vision API. Separately, in January 2018, Google blocked Smart Compose, a Gmail feature that automatically suggests sentences for users as they type, from suggesting gender-based pronouns.

A gender-neutral approach to language translation and computer vision is a part of Google’s larger effort to mitigate prejudice in AI systems. The Mountain View company uses tests developed by its AI ethics team to uncover bias and has banned expletives, racial slurs, and mentions of business rivals and tragic events from its predictive technologies.

Facebook's Climate Conversation Map reveals reactions to environmental news

Facebook signage at 2016 F8 conference in San Francisco.
Image Credit: Jordan Novet / VentureBeat

In partnership with organizations including the World Resources Institute and the Yale Program on Climate Change, Facebook today released the Climate Conversation Map, a set of maps that tap aggregated, anonymized data to highlight where, when, and how often users share or react to climate change-related links. It’s available by request to research partners and nonprofits following a preview last year, and it now provides new information including data and insights into how conversations ebb and flow over time.

The maps could be used by organizations to visualize the rate of engagement with climate-related news, and by extension to study the impact it might have on global sentiment. Unsurprisingly, studies show that climate change opinion differs throughout the world — according to a report from the European Investment Bank, 80% of Chinese respondents believe that climate change is irreversible, compared with 25% of respondents in Italy and Spain.

As with Facebook’s other data-driven maps, the Climate Conversation Map is created with an automated system that pulls the daily volume of total external link shares on Facebook as well as the number of reshares and comments/reactions to the links. The system then flags the subset of links containing the keywords “climate change” or “global warming” across 21 major languages, determined by both population size and the number of active Facebook users.

Facebook climate maps

Above: A map of climate sentiment by country. Darker regions indicate more engagement with climate change-related content.

Image Credit: Facebook

The information is anonymized and collated each week, and in areas where the number of people sharing the links is greater than 10, the system computes the absolute number of links related to climate change and the relative percentage of total links shared. The result is a series of color-coded world maps that update regularly, where darker green areas indicate climate change conversation hotspots and lighter green indicates less active areas.
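
In toy form, that weekly aggregation — count climate-related link shares per region, compute their share of all links, and suppress small regions — might look like the following Python sketch. The data model is invented; Facebook’s pipeline is not public, and here share events stand in for unique sharers:

```python
from collections import Counter

KEYWORDS = ("climate change", "global warming")

def weekly_region_stats(shares):
    """shares: list of (region, link_text) events for one week. Returns, per
    region, the absolute count of climate-related link shares and their share
    of all links, suppressing regions at or below the threshold of 10."""
    total, hits = Counter(), Counter()
    for region, text in shares:
        total[region] += 1
        if any(k in text.lower() for k in KEYWORDS):
            hits[region] += 1
    return {r: (hits[r], hits[r] / total[r])
            for r in total if hits[r] > 10}
```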

Facebook says that in September, it piloted the Climate Conversation Map for several organizations during the U.N. General Assembly and Climate Week in New York City, and that more recently, it shared the map with more than 100 members of its Data for Good program. In the interim months, Facebook worked with social analytics platform CrowdTangle to build the Climate Conversation Live Display, a public dashboard of searches for “climate change” and “global warming” in the same 21 languages used in the Climate Conversation Map.

“We are excited to share these tools with leading researchers and nonprofits in the climate space,” wrote Facebook in a blog post. “We look forward to seeing how different organizations use these maps to inform the climate debate and deliver new resources.”

Facebook’s work in AI and mapping comes after the social network shared with the OpenStreetMap community its Map With AI tool, which automates several of the most time-consuming steps involved in annotating roads, buildings, and bridges. Additionally, the Menlo Park company recently made available RapiD, an AI-powered version of OpenStreetMap’s editing tool iD, in addition to AI-generated road mappings in Afghanistan, Bangladesh, Indonesia, Mexico, Nigeria, Tanzania, and Uganda.

Google's AI teaches robots to grasp occluded objects and adapt to new situations

In a pair of papers published on the preprint server Arxiv.org this week, Google and University of California, Berkeley researchers describe new AI and machine learning techniques that enable robots to adapt to never-before-seen tasks and grasp occluded objects. The first study details X-Ray, an algorithm that when deployed on a robot can search through heaps of objects to grasp a target object, while the second lays out a policy adaptation technique that “teaches” robots skills without requiring from-scratch model training.

Robot grasping is a surprisingly difficult challenge. For example, robots struggle to perform what’s called “mechanical search,” which is when they have to identify and pick up an object from within a pile of other objects. Most robots aren’t especially adaptable, and there’s a lack of sufficiently capable AI models for guiding robot hands in mechanical search.

X-Ray and the policy adaptation step could form the foundation of a product-packaging system that spots, picks up, and drops a range of objects without human oversight.

X-Ray

The coauthors of the study about X-Ray note that mechanical search — finding objects in a heap of objects — remains challenging due to a lack of appropriate models. X-Ray tackles the problem with a combination of occlusion inference and hypothesis predictions, which it uses to estimate an occupancy distribution for the bounding box (coordinates for a rectangular border around an object) most similar to an object while accounting for various translations and rotations.

X-Ray assumes that there’s at least one target object fully or partially occluded by unknown objects in a heap, and that a maximum of one object is grasped per timestep. Taking RGB images and target objects as inputs, it predicts the occupancy distribution and segmentation masks for the scene and computes several potential grasping actions, executing the one with the highest probability of succeeding.
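
A minimal sketch of that loop is below, with hypothetical callables standing in for the paper’s learned occupancy, segmentation, and grasp models:

    def mechanical_search(rgb_image, target, predict_occupancy, segment,
                          propose_grasps, execute, max_steps=20):
        """Greedy mechanical search in the spirit of X-Ray: estimate where the
        occluded target likely is, score candidate grasps, and execute the
        most promising one, removing at most one object per timestep."""
        for _ in range(max_steps):
            occupancy = predict_occupancy(rgb_image, target)  # likelihood map
            masks = segment(rgb_image)                        # per-object masks
            grasps = propose_grasps(occupancy, masks)         # [(action, p), ...]
            action, p_success = max(grasps, key=lambda g: g[1])
            done, rgb_image = execute(action)                 # one grasp per step
            if done:                                          # target extracted
                return True
        return False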

To train and validate X-Ray, the researchers produced a corpus of 10,000 augmented depth images labeled with object occupancy distributions for a rectangular box target object. Sampling from an open source data set of 1,296 3D CAD models on Thingiverse, they selected 10 box targets of various dimensions with equal volume but small thickness, so that they were more likely to be occluded. Across those 10 targets, this netted them a total of 100,000 images.

Google robot grasping occluded

Above: A diagram illustrating the X-Ray technique.

Image Credit: Google

About 80% of the images were reserved for training, and the rest were set aside for testing. One thousand additional images containing simulated objects — a lid, a domino, and a flute — were used to evaluate X-Ray’s generalization to unseen shapes, objects, aspect ratios, and scales.

In physical experiments involving a real-world ABB YuMi robot with a suction cup and a parallel jaw gripper, the researchers tasked X-Ray with filling a bin with objects and then dumping the bin on top of the target object. In heaps initially containing 25 objects, the system extracted the target object in a median of 5 actions over 20 trials with a 100% success rate.

The coauthors leave to future work increasing X-Ray’s training efficiency and analyzing the effect of data set size and the number of translations and rotations used to generate training distributions. They also plan to explore reinforcement learning policies based on the reward of target object visibility.

Policy adaptation

In the more recent of the two papers, the coauthors sought to develop a system that continuously adapts to new real-world environments, objects, and conditions. That’s in contrast to most robots, which are trained once and deployed without much in the way of adaptation capabilities.

The researchers pretrained a machine learning model on a corpus of 608,000 grasp attempts covering a range of objects, then tasked it with grasping objects using a gripper moved 10 centimeters to the right of where it started. After the system practiced gripping over the course of 800 attempts, those attempts were logged as a new target data set; during fine-tuning, training examples were drawn from the target data set 50% of the time and from the original data set the rest of the time.
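
A sketch of that 50/50 mixing step, assuming PyTorch-style map-style datasets (the paper’s actual pipeline isn’t public):

    import random
    from torch.utils.data import Dataset

    class MixedGraspData(Dataset):
        """Draw each sample from the small target data set half the time and
        from the large pretraining corpus otherwise, so fine-tuning adapts to
        the new condition without forgetting the original grasping skill."""
        def __init__(self, pretrain_data, target_data, target_frac=0.5):
            self.pretrain = pretrain_data
            self.target = target_data
            self.target_frac = target_frac

        def __len__(self):
            return len(self.pretrain)

        def __getitem__(self, idx):
            if random.random() < self.target_frac:
                return self.target[random.randrange(len(self.target))]
            return self.pretrain[random.randrange(len(self.pretrain))]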

Google robot grasping adaptation

Above: The model adaptation training process, in schematic form.

Image Credit: Google

These steps — pretraining, attempting a new task, and fine-tuning — were repeated for five different scenarios. In one, harsh lighting impeded the robot’s cameras; in another, a checkerboard-patterned background made it difficult for the model to identify objects. Lastly, the experimenters had the robot grasp transparent bottles not seen during training (transparent objects are notoriously hard for robots to grasp because they sometimes confuse depth sensors) and pick up objects sitting on a highly reflective sheet metal surface.

The researchers report that in experiments, the model successfully grasped objects 63% of the time in harsh lighting, 74% of the time with transparent bottles, 86% of the time with a checkerboard backing, 88% of the time with an extended gripper, and 91% of the time with an offset gripper. Moreover, they say that it only took 1 to 4 hours of practice for the robot to adapt to new situations (compared with roughly 6,000 hours learning how to grasp) and that performance didn’t degrade the more the model adapted.

In the future, the team plans to investigate whether the process can be made automatic.

QuillBot taps AI to rewrite and rephrase whole paragraphs

Image Credit: raindrop74 / Shutterstock

QuillBot, a startup developing AI tools that intelligently rewrite text, today announced that it raised $4 million in financing. Fresh capital in hand, the cofounders hope to make QuillBot’s platform a one-stop editing shop, with modules that will summarize information from articles and complete paragraphs by synthesizing sentences informed by intent. QuillBot also plans to establish an R&D lab to conduct and publish AI and machine learning work, with an emphasis on natural language processing (NLP).

QuillBot is the brainchild of University of Illinois at Urbana-Champaign (UIUC) dropout Anil Jason and 2017 alums Rohan Gupta and David Silin. Jason and Silin collaborated on a senior thesis project involving a system that, given an article, generated multiple-choice questions about its content. The pair applied to the college’s iVenture accelerator along with Gupta, where they made a breakthrough in paraphrase generation.

QuillBot’s eponymous product rephrases up to whole paragraphs of text, which isn’t on its face novel — services like WordAi, Chimp Rewriter, Grammarly, and indeed Microsoft Word purport to do the same. But the company claims it leverages state-of-the-art techniques that give its platform an advantage where grammatical and syntactical correctness are concerned. For instance, it co-learns with users so that the more people use QuillBot, the better its suggestions and understanding of English becomes.

QuillBot

Above: QuillBot’s online tool.

Image Credit: QuillBot

QuillBot is chiefly targeting college students who write extensively in a language that’s not their first, as well as non-native professionals who communicate about complex topics. (The company says that of the roughly one million people who use its tool, 60% are non-native English speakers.) It’s a large market — according to the Ethnologue project, over 740 million people use English as their second language.

The vast majority of people use QuillBot as a grammar checker, fluency enhancer, and muse, according to Jason, as well as a sort of “smart thesaurus.” He acknowledged that it could be used for plagiarism, in theory — which neither he nor the company condones — but said that QuillBot’s engagement data shows it’s not a typical use case.

“We analyze users’ behaviors and patterns to determine this. If users are changing every other word in a sentence, it indicates they are likely plagiarizing,” said Jason. “However, most of our users only lightly edit the output, indicating that they are using this as a legitimate writing aid.”

QuillBot

Above: QuillBot’s Microsoft Word plugin.

Image Credit: QuillBot

Chicago-based QuillBot — which was bootstrapped until now and which is profitable — makes money by charging for a premium service ($14.95 per month) that unlocks access to Microsoft Word and Google Docs plugins, a Chrome extension, and support for simultaneous processing of up to 10,000 characters and 15 sentences. Additionally, it offers an API that enterprises can use to integrate QuillBot with chatbots, word processors, and other existing platforms.

GSV Ventures and Sierra Ventures led the seed funding round.

H1 Insights raises $12.9 million for AI that helps companies find health care professionals

health care chest scan

H1 Insights, a startup developing a platform that connects health care and life science professionals and companies, today announced that it raised $12.9 million. With the equity funding, the company plans to further develop its products, which tap AI and machine learning to identify thought-leading doctors in a given disease area.

H1’s service was already lucrative — year-over-year revenue grew 350% in 2019 compared with 2018 — but there’s been renewed interest in light of the novel coronavirus pandemic. Over 35 life sciences, biotech, and pharmaceutical companies and institutions (including Novartis, Baxter, and the University of California, San Francisco) are actively using H1’s platform to seek out experts in various health fields, with the goal of accelerating the adoption of new treatments and medical equipment.

“This powerful of a solution in the market didn’t exist before H1,” CEO Ariel Katz told VentureBeat via email. “Linkedin Sales Navigator is used in other industries to find professionals to engage with, but in healthcare, H1 is starting to be used in the same way to find, engage, and keep up to date with healthcare professionals. Within the healthcare industry, we focus on pharma, biotech, medical devices, hospitals, health systems, and medical schools. As an example, within pharma there are several use cases for our data, including identifying healthcare professionals for clinical trials to reading the latest publication by a leading healthcare professional.”

The data set underlying H1’s platform comprises over 8 million researcher, physician, pharmacist, nurse, and administrator profiles across 70 countries, 16,000 health care organizations, 160 million peer-reviewed publications (and citation counts), 350,000 clinical trials, 2 billion procedures, 3 billion diagnoses, 700 medical societies, and proprietary scholarly metrics that are kept up to date weekly. The company claims to capture information on over 95% of health care professional-patient interactions in the U.S., including their speaking history, which can be used to identify treatment leaders and understand their procedural and diagnostic tendencies.

H1 Insights

H1 customers can search by relevant codes to identify the physicians doing work directly related to their service line. (Every professional on the platform has a profile listing their publications, trials, payments, congresses, social media history, and more, as well as contact information including email, phone number, and mailing address.) Alternatively, they can filter by institution type and cross-reference the data with referral trends to see which practitioners need referral partners, and then export this information to customer relationship management services like Veeva, Salesforce, and Microsoft Dynamics.

H1 recently began offering free access to professionals on its platform in need of medical supplies, in an effort to combat the pandemic. For a limited time, users can view a subset of H1’s corpora that includes journal articles, trials, open payments, and science congresses.

This latest round of funding — a series A — was led by Menlo Ventures, with participation from Novartis DRX, Y Combinator, Baron Davis Enterprises, ClearPoint Investment, Jeff Hammerbacher, Liquid 2 Ventures (a seed stage fund led by Joe Montana), and Underscore VC. As a part of it, Menlo Ventures partner Greg Yap will join H1’s board of directors.

New York-based H1 was cofounded by Katz and Zachary Feuerstein in 2017, and it has 25 full-time employees. To date, it’s raised close to $20 million.

Maccabi to deploy AI that identifies patients at risk of developing COVID-19 complications

Image Credit: iStock / Steve Debenport

Israel’s Maccabi Healthcare Services, an HMO developing a platform for the discovery of clinical insights from medical information, plans to deploy an AI system that can identify people at risk of developing COVID-19 complications. The work is being done in partnership with the Kahn-Sagol-Maccabi Research and Innovation Institute and Medial EarlySign and looks at factors like preexisting conditions, the three organizations announced today. The AI could enable the HMO to fast-track patients for testing at a time when COVID-19 test kits are tough to come by. As of early this month — citing a growing shortage of reagents — the Israel Health Ministry said it wouldn’t test people for COVID-19 symptoms unless they had recently traveled.

According to Medial EarlySign CEO Dr. Jeremy Orr, the COVID-19 risk detection system shares elements with the company’s existing flu complications model, whose design was informed by an analysis of tens of millions of people treated by Maccabi and billions of lab results, structured electronic health record data, vital signs, demographics, and other data points. Following a pilot analysis of Maccabi patients’ anonymized records, it identified the top 2% of highest-risk patients (approximately 40,000 people), taking into account variables like those below (a simplified sketch of this style of risk scoring follows the list):

  • Age
  • Respiratory disease such as pneumonia, bronchiolitis, and influenza
  • Hospital admission history
  • Weight and BMI
  • Medications prescribed for respiratory illnesses or conditions, such as asthma and cough
  • Heart disease
  • Smoking history
  • Diabetes
  • Digestive disease
  • Immunosuppression therapies
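
As an illustration only — Medial EarlySign hasn’t published its model — a risk scorer over features like these could be built with an off-the-shelf gradient-boosted classifier and thresholded at the 98th percentile to flag the top 2%:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))    # toy stand-ins for age, BMI, admissions, etc.
    y = rng.integers(0, 2, size=2000)  # 1 = went on to severe complications

    model = GradientBoostingClassifier().fit(X, y)
    risk = model.predict_proba(X)[:, 1]
    high_risk = risk >= np.quantile(risk, 0.98)  # flag the top 2% by risk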

When a person flagged by the system as high risk contacts a nurse or a doctor to report COVID-19-like symptoms, the system automatically notifies the medical professional of that person’s status. From there, the potentially infected person can expect priority treatment at Maccabi testing facilities and drive-in testing stations. Alternatively, they might be offered an at-home test that can be administered remotely.

Medial EarlySign says it’s in “advanced negotiations” with health systems in the U.S. that have expressed interest in incorporating the algorithm into their COVID-19 health care protocols.

Efforts to apply AI to COVID-19 patient data are underway within ICUs, too. At Stanford University, a team led by physician Ron Li is evaluating whether an algorithm trained on over 130,000 patient encounters — Deterioration Index — could accurately identify which COVID-19 patients’ condition will deteriorate. Bayesian Health, a startup spun out of Johns Hopkins University, is working on an early warning model for acute respiratory distress syndrome, a type of respiratory failure that can be caused by COVID-19. And the University of Chicago Medical Center is testing an upgrade to its AI eCart system that will monitor oxygen to signal when a COVID-19 patient’s lungs might be failing.

Bodhala raises $10 million to optimize companies' legal spend with AI

Bodhala
Image Credit: Bodhala

Legal tech startup Bodhala this week closed a $10 million round, the bulk of which will be put toward product R&D and sales efforts, cofounder and CEO Raj Goyle told VentureBeat. The company’s platform taps AI and machine learning to help companies analyze and optimize legal spend — a valuable service in light of the fact that equity partner profits at the top 100 U.S. firms have doubled since 2004 (to $1.88 million in 2018), with eight firms averaging more than $4 million.

Bodhala was founded by Goyle and Ketan Jhaveri, who met at Harvard Law School and pursued careers in politics and law, respectively. Goyle was a member of the Kansas House of Representatives and ran for U.S. Congress in 2010, while Jhaveri worked for the U.S. Department of Justice and spent 10 years practicing antitrust law at Simpson Thacher & Bartlett. They came together in 2014 and went to market with their legal spend management solution — Hercules — about two years later.

Hercules, which is hosted on Amazon Web Services, spotlights legal costs and firm performance by analyzing rates, as well as relationship- and practice-level discounts. It ingests years of information from law firms and e-billers, which it standardizes at the line-item level to flag alternative fee arrangements and detect and correct inaccurate records. It then creates trillions of views to enable clients to examine things like practice areas, matter complexity, and efficiency while identifying trends to help forecast legal spend and suggesting ways that spend might be adjusted.

Hercules lets customers centralize their annual rate card process to measure the impact of law firm rate increases. It also measures the rates customers pay their firms against similar companies and spend types and reveals opportunities to improve management of outside counsel.

Bodhala says the average customer needs three weeks for quality assurance and standardization and to ensure that Hercules is receiving the correct data and any supplemental internal fields are properly mapped.

Markets and Markets anticipates that the legal AI software market will be worth $1.24 billion by 2024. Investors have poured tens of millions into firms like Disco, which streamlines legal discovery with AI and machine learning; Cognition IP, which taps AI to help startups patent their inventions; and LinkSquares, which expedites contract management with machine learning algorithms.

Edison Partners led New York-based Bodhala’s latest funding round, which comes after the startup experienced over 300% growth in both revenue and headcount in 2019. Goyle says it’s on track to do the same in 2020, with significant client growth across such verticals as financial services, health care services, insurance, energy, and private equity.

Einride will now develop human-driven trucks as part of transition to full autonomy

Einride: T-Pod

Above: Einride: T-Pod

Sweden’s Einride, which is building driverless electric trucks that can also be controlled remotely, will begin developing a more traditional type of truck with human drivers. This move acknowledges that the shift to autonomous transport may take longer than some had predicted and will likely be more gradual.

Stockholm-based Einride has partnered with German supermarket giant Lidl to supply trucks as part of an ongoing collaboration that will initially focus on electrification, with automation coming later. This should also go some way toward bolstering Lidl’s push to establish an emissions-free supply chain.

Founded in 2016, Einride has raised north of $30 million to develop electric trucks that have no space for a human driver. These “pods” can carry all manner of freight and have been tested on-site at customers’ facilities and on public roads in Sweden for the past year.

Einride T-log

Above: A rendering of the T-log.

Image Credit: Einride

Although Einride’s existing trucks are designed to drive themselves, many scenarios may require human intervention, including the need to circumvent complex or unusual obstacles or local regulations that restrict fully autonomous vehicles to certain roads. Einride’s solution so far has been to hire teleoperators who are trained to control multiple autonomous trucks from afar.

Einride operators will be able to control multiple autonomous trucks from a single remote drive station

Above: Einride operators will be able to control multiple autonomous trucks from a single remote drive station

But it appears this setup is not quite ready for prime time, which is partly why Einride is now working with Lidl on an interim solution that leverages elements of Einride’s business, minus the driverless pods.

All-electric

Einride said it’s now working with manufacturing partners to supply Lidl with electric trucks. While these vehicles will have human drivers inside, they will sport some of Einride’s other technology. This includes telematics hardware that serves data to Einride’s freight mobility platform to improve shippers’ efficiency by optimizing routes and schedules.

On the surface, today’s move could be symptomatic of autonomous transport’s slow route to mainstream adoption. Indeed, Einride first announced its tie-up with Lidl way back in 2017 and revealed plans to pilot a driverless delivery program in 2018. Since then, Einride has been testing its autonomous pods with Lidl and other companies, including logistics giant DB Schenker. Now Einride and Lidl are splitting this transition into different stages, the first of which will require human drivers. Einride CEO Robert Falck insists that diversification was always on the company’s roadmap and said it’s more about transforming the freight industry than building autonomous trucks per se.

“Einride has always been committed to transforming global transport holistically, not just with autonomy,” he told VentureBeat. “As such, diversification has long been part of the business plan.”

In other words, Falck is pitching this new Lidl partnership, alongside the freight platform and new electric trucks, as necessary steps on the way to a fully autonomous future.

“The realization of autonomous freight solutions on a global scale is closer than ever — the introduction of human-driven electric trucks and the Einride freight mobility platform are part of our systemic approach to revolutionizing the road freight industry, creating a clear path to a connected, emissions-free, and fully automated future with actionable steps today,” Falck added.

The initial electrification program will be focused on the Stockholm region, where Lidl will transport goods from its central warehouse to stores in the area. The first such deliveries are expected to begin this fall, with plans to expand to other stores across Sweden in the future.

Intel's Project Corail monitors coral reef health with AI

Intel Project Corail
Image Credit: Intel

To commemorate Earth Day, Intel — in partnership with Accenture and Sulubaaï Environmental Foundation, a Philippine-based nonprofit dedicated to protecting Palawan’s natural resources — detailed Project Corail, an AI-powered platform that monitors and analyzes the resiliency of coral reefs. Since its launch in May 2019, it’s collected 40,000 images of the reef surrounding Pangatalan Island, which researchers have leveraged to gauge reef health in real time.

If the pilot program in Palawan is successful, Project Corail could be used to monitor more of the world’s at-risk reef population. (Stresses such as pollution, overfishing, and global climate change will kill an estimated 90% of reefs in the next century.) It’s a worthwhile mission considering that reefs not only protect coastlines from tropical storms but also provide food and income for 1 billion people, generating $9.6 billion from tourism and recreation alone each year.

Project Corail consists of a buoy equipped with marine-grade solar panels, batteries, and a transmitting device (either Wi-Fi or 4G), as well as a camera attached to the mooring line. An Intel Neural Compute Stick 2 plugged into a Raspberry Pi handles on-buoy computing, while an onshore PC processes images with an Intel Arria 10 FPGA.
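
On the Pi side, inference on the Compute Stick typically goes through Intel’s OpenVINO toolkit, which exposes the stick as the MYRIAD device. A minimal sketch follows (model file names are hypothetical, and API details vary across OpenVINO releases):

    import cv2
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="reef_model.xml", weights="reef_model.bin")
    exec_net = ie.load_network(network=net, device_name="MYRIAD")  # the NCS2
    input_blob = next(iter(net.input_info))

    # Prepare one camera frame as an NCHW tensor and run detection on-buoy.
    frame = cv2.imread("reef_frame.jpg")
    blob = cv2.resize(frame, (300, 300)).transpose(2, 0, 1)[np.newaxis, ...]
    detections = exec_net.infer(inputs={input_blob: blob})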

Intel Project Corail

Above: A schematic layout of Project Corail.

Image Credit: Intel

Deploying the buoy required building a concrete underwater platform — Sulubaaï’s Sulu-Reef Prosthesis — that provides strong support for unstable coral fragments. According to Intel, the Sulu-Reef Prosthesis incorporates fragments of living coral that will grow and expand, providing a habitat for fish and marine life.

Images captured by Project Corail are transmitted from the camera to the PC at regular intervals via an Azure app running on the Pi. A custom-trained machine learning model — a part of Accenture’s Applied Intelligence Video Analytics Services Platform (VASP) — counts and classifies the marine population, and the data funnels from the PC to Microsoft Azure for collation in reporting dashboards.

Intel Project Corail

Above: A Project Corail camera module attached to a mooring line.

Image Credit: Intel

In the future, the Project Corail team plans to deploy a better-optimized AI classifier model and a backup power supply. They’re also considering infrared cameras, which would allow for night shots, and adapting the platform to study tropical fish migration rates and human intrusions into restricted underwater areas.

Project Corail isn’t Intel’s first foray into AI-enabled environmental conservation. Last year, the chipmaker announced TrailGuard AI, a collaboration with the nonprofit Resolve that uses motion-detecting, data-transmitting cameras to curb African elephant poaching. More recently, Intel teamed up with Microsoft’s AI for Earth initiative and data science consultancy Gramener to help ecologists keep tabs on Antarctic penguin populations.

Ready or not, the automation age is suddenly upon us

Guest
Image Credit: Siberian Photographer/Getty images

After COVID-19, there’s no going back to the way things were. There won’t be sporting events or concerts with thousands of people sitting in close proximity for a long time. The open office floor plan will likely become a relic of a bygone era. It is, as The New York Times stated recently, the end of the economy as we knew it.

What we’re witnessing is the epitome of a “burning platform,” a metaphor for a crisis that demands drastic change. Much like a wartime atmosphere, we’re in a moment when changes such as universal healthcare and universal basic income suddenly seem possible.

There is no going back, and looking forward offers no clear view of just what comes next, only radical uncertainty and possible scenarios. We are indeed looking through a glass darkly.

The next several years will be tumultuous amid a severe economic downturn as society seeks the “next normal.” What we do know is that existing trends will be accelerated from sheer economic necessity even as new trends emerge. These rapid changes will include the acceleration of enterprise digital transformation and the automation of work.

During this time AI-driven automation led by computer vision, IoT sensor arrays, and pervasive connectivity will accelerate the replacement of humans in the production process. Not just because automation technologies are now capable of performing these tasks but also because — unlike humans — machines do not get sick, go on strike, or stop production.

Recent examples of meatpacking plant closings highlight the problem. “The whole system is gummed up,” Iowa State University agricultural economist Dermot Hayes said in a USA Today story. “It’s not just the farm and the packer. It’s all along the chain.”

2025: AI-powered automation is part of the next normal

With digital transformation accelerating, that chain could look very different a few years from now. Already, a highly automated meatpacking plant that employs fewer people is remaining open. A more fully automated food supply chain could reduce future scenes of empty store shelves, long lines at food banks, and crops rotting in the fields. Over the next five years, AI-powered automation will be far advanced and an ingrained feature of the next normal.

Starting with the farm, 75 million connected agricultural devices such as drones, tractors, and sensors are already producing volumes of data for analysis and insight. For example, Bear Flag Robotics is using computer vision for autonomous tractors that can work fields around the clock without a driver, using sensors like those in autonomous road vehicles. Workers at NatureFresh Farms don’t need to patrol the aisles of their 185 acres of greenhouses to see if crops are ripe. Instead, robotic cameras collect images of plants, feeding the data into AI algorithms that calculate exactly when blossoms will transform themselves into fully ripe vegetables.

Harvesting and sorting are also using AI-powered automation. Agricultural robots (known as “agribots”) are starting to be used to harvest crops at a higher volume and faster pace, more accurately identify and eliminate weeds, and reduce costs. For example, Harvest CROO Robotics has developed a strawberry-picking agribot. BBC Technologies uses computer vision to classify and sort up to 2,400 individual blueberries every second, determining which should be eaten within a week and which would better survive the long journey from South America to the United States.

Beyond farm and ranch, retailers will know exactly what products their customers want, how much, where, and when. Many of these applications are still in early stages of maturity and adoption. Necessity will drive rapid advances and within several years could approach maturity, resilience, and widespread implementation. By 2025, agribots, “lights out” warehouses, cashierless checkout, and home delivery by autonomous vehicles will move from curiosities to commonplace.

Broad impact on industry and society

The food sector is hardly the only one that will speed up AI adoption and automation because of the crisis. Similar trends are taking place in banking, retail, consumer goods, insurance, pharmaceuticals, and other industries.

A survey of nearly 800 executives worldwide by Bain & Company estimates the number of companies scaling up automation technologies will double in the next two years. The firm noted: “As companies adapt to new routines and prepare for a pending downturn, automation solutions that might have been years away a few months ago, are suddenly right around the corner.” Examples of manufacturers that plan to speed up automation efforts include LG Electronics, VW, and Hyundai.

The debate about what impact AI-driven automation will have on jobs and the workforce, both in terms of timing and reach, has so far been mostly theoretical. This is because large-scale automation throughout industry supply chains has not been technically feasible to date. That is changing rapidly. At the same time, the crisis will force businesses to adopt lower cost and higher efficiency technologies as a matter of survival. Increased automation will add much needed resilience to complex systems and help to better manage future pandemics and other black swan events.

Kevin Scott, CTO at Microsoft, is optimistic about jobs, claiming that AI will create new opportunities. His view is similar to others who believe that AI and robotics will drive economic growth and release people from performing mundane, boring, and unfulfilling tasks, while creating new roles. However, those new roles have not really materialized, at least not yet, while the crisis at hand will push automation forward at an accelerated rate.

Perhaps those hoped-for new job categories will still emerge. Economics Professor Johannes Moenius at the University of Redlands noted that the workforce that companies will need after the recovery will likely look quite different from the workforce needed before. People will increasingly need a blend of technical abilities to develop and administer automation tools combined with social, collaboration, and design skills.

While AI has been around in various forms for 70 years, it is the decade ahead that will mark the true beginning of the AI era, when it moves from game master to broad industry adoption. This transformation will have profound impacts on society.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.

Facebook's AI detects fake accounts with fewer than 20 friend requests

In a paper highlighted today in a Facebook blog post, engineers describe an algorithm — SybilEdge — to detect fake accounts that evade Facebook’s anti-abuse filters at registration time but that haven’t friended enough people to perpetrate abuse. The goal is to mitigate the accounts’ ability to launch attacks against other users, in part by comparing the way users add friends to their extended social networks.

SybilEdge — which can detect fake Facebook accounts less than a week old with fewer than 20 friend requests — has immediate application for platforms dealing with a wave of misleading information about the coronavirus pandemic. An analysis published by the Reuters Institute for the Study of Journalism at the University of Oxford found that 33% of people have seen some form of misinformation about COVID-19 on social networks like Twitter, Facebook, and YouTube.

In architecting SybilEdge, the development team noted that abusers need to connect to targets in order to launch abuse — that is, they need to find targets, send them a friend request, and have the request accepted. Perhaps unsurprisingly, internal Facebook studies revealed that non-abusers differ in both their selection of friends and those friends’ responses to their friend requests: Fake accounts’ requests were rejected more often than real users’ requests. In addition, fake accounts were often careful when picking their friend request targets, likely to maximize the probability of their requests being accepted.

Facebook created a corpus with which to train SybilEdge by segmenting users into two groups: those more likely to accept friend requests from real accounts and those more likely to accept requests from fake accounts. If a member of the first group rejects an incoming request, that serves as a signal the requester may be fake. Likewise, if users who tend to accept fake requests accept a request, it indicates the requester was likely fake.

SybilEdge works in two stages. First, it’s trained by observing the aforementioned samples over time, leveraging outputs from Facebook’s behavioral and content classifiers that flag accounts based on actual abuse. This training phase provides the model with all the necessary parameters (i.e., configuration variables estimated from data and required by the model when making predictions), after which it runs in real time, updating the probability that a requester is fake with each friend request and response.
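
In spirit, that real-time update is Bayesian: each response multiplies the requester’s odds of being fake by a likelihood ratio reflecting how that particular target historically treats fake versus real requesters. A toy version (not Facebook’s implementation) might look like this:

    def update_fake_odds(prior_odds, p_accept_if_fake, p_accept_if_real, accepted):
        """Multiply the odds that a requester is fake by the likelihood ratio
        of the observed response for this particular target."""
        if accepted:
            ratio = p_accept_if_fake / p_accept_if_real
        else:
            ratio = (1 - p_accept_if_fake) / (1 - p_accept_if_real)
        return prior_odds * ratio

    odds = 0.05 / 0.95  # made-up prior odds that a new account is fake
    # A target who mostly accepts real requesters rejects this request,
    # which pushes the requester's fake odds up (0.4 / 0.1 = 4x here).
    odds = update_fake_odds(odds, p_accept_if_fake=0.6,
                            p_accept_if_real=0.9, accepted=False)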

Facebook claims that SybilEdge is above 90% accurate at detecting fake accounts with 15 or fewer friend requests on average and 80% accurate at detecting fake accounts with 5 friend requests. Moreover, unlike the baselines with which it was compared, its performance doesn’t degrade with more friend requests (over 45).

“SybilEdge helps us identify abusers quickly and in a way that can be explained and analyzed. In the near future, we plan to look at additional ways that can further speed up the detection of abusive accounts and help make confident decisions even faster than SybilEdge. We plan to accomplish this by mixing feature-based and behavior-based models,” wrote Facebook.

Facebook is broadly moving toward an AI training technique called self-supervised learning, in which unlabeled data is used in conjunction with small amounts of labeled data to produce an improvement in learning accuracy. Facebook’s deep entity classification (DEC) machine learning framework was responsible for a 20% reduction in abusive accounts on the platform in the two years since it was deployed. And in a separate experiment, Facebook researchers were able to train a language understanding model that made more precise predictions with just 80 hours of data compared with 12,000 hours of manually labeled data.

Nvidia launches Project MONAI AI framework for health care research in alpha

Nvidia, in conjunction with King’s College London, today announced the open source alpha release of Project MONAI, a framework for health care research that’s available now on GitHub. MONAI stands for Medical Open Network for AI. The framework is optimized for the demands of health care researchers and designed to run with deep learning frameworks like PyTorch and Ignite. A main goal of the MONAI framework is to help researchers reproduce their experiments so they can build on one another’s work. One example in the alpha release is data augmentation during training, with defined interfaces to control random states and ensure training results stay the same, Nvidia VP of health care Kimberly Powell told VentureBeat in an email.
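
For example, seeding a chain of random transforms makes an augmentation pipeline repeatable run to run. A minimal sketch against the MONAI API (exact names may shift while the project is in alpha):

    import numpy as np
    from monai.transforms import Compose, RandRotate90

    # Fixing the transform chain's random state makes the "random" augmentation
    # repeatable, so two training runs see identical data.
    augment = Compose([RandRotate90(prob=0.5, spatial_axes=(0, 1))])
    augment.set_random_state(seed=42)

    volume = np.random.rand(1, 64, 64, 64).astype(np.float32)  # dummy 3D scan
    augmented = augment(volume)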

“Reproducibility of scientific research is of paramount importance, especially when we are talking about the application of AI in medicine,” Jayashree Kalpathy-Cramer, scientific director of MGH & BWH Center for Clinical Data Science, said today in Powell’s blog post. “Project MONAI is providing a framework by which AI development for medical imaging can be validated and refined by the community with data and techniques from the world over.”

Reproducibility, or the ability to repeat results, is an essential part of the scientific method and helps ensure the creation of robust machine learning models. A need for more reproducibility in AI research last year led machine learning conferences like ICML to encourage or require submission of code together with research papers.

The AI framework will be tied into Nvidia’s Clara medical imaging tools in the future. The alpha version of MONAI includes examples for tasks like 3D organ segmentation for abdominal CT or 2D classification for brain MRI imagery.

The team first revealed the plan for a common framework to share best practices in health care imaging to collaborators at the Medical Image Computing and Computer Assisted Intervention Society conference last fall. Researchers and engineers at Nvidia and King’s College London began work on the framework in January, Powell said.

Nvidia and King’s College London created MONAI in collaboration with medical researchers from China, Germany, and the U.S., from institutions including Chinese Academy of Sciences, the German Cancer Research Center, MGH & BWH Center for Clinical Data Science, Stanford University, and the Technical University of Munich.

The news comes days after Nvidia shared that a consortium of hospitals in the U.S. and Brazil, by combining resources through federated learning, created a mammography AI for breast cancer screenings that is more accurate than any model a single institution managed to build on its own.

Last fall, Nvidia also worked with King’s College London to use federated learning to create a neural network for brain tumor segmentation.

Facebook partners with AWS on PyTorch 1.5 upgrades, like TorchServe for model serving

Facebook’s PyTorch has grown to become one of the most popular deep learning frameworks in the world, and today it’s getting big upgrades, including a stable C++ frontend API and new libraries like TorchServe, a model-serving library developed in collaboration with Amazon Web Services.

The TorchServe library comes with support for both Python and TorchScript models; it provides the ability to run multiple versions of a model at the same time or even roll back to previous versions in a model archive. More than 80% of cloud machine learning projects with PyTorch happen on AWS, Amazon engineers said in a blog post today.

PyTorch 1.5 also includes TorchElastic, a library developed to let AI practitioners scale cloud training resources up or down as needed and keep distributed training jobs running when things go wrong.

An AWS integration with Kubernetes for TorchElastic enables container orchestration and fault tolerance, meaning Kubernetes users no longer have to manually manage the services associated with model training in order to use TorchElastic.

TorchElastic is meant for use in large, distributed machine learning projects. PyTorch product manager Joe Spisak told VentureBeat TorchElastic is used for large-scale NLP and computer vision projects at Facebook and is now being built into public cloud environments.

“What TorchElastic does is it basically allows you to vary your training over a number of nodes without the training job actually failing; it will just continue gracefully, and once those nodes come back online, it can basically restart the training and start calculating variants on those nodes as they come up,” Spisak said. “We saw that [elastic fault tolerance] as a chance to partner again with Amazon, and we also have some pull requests in there from Microsoft that we’ve merged. So we expect basically practically all three major cloud providers to support that natively for users to do elastic fault tolerance in Kubernetes on their clouds.”

Work between AWS and Facebook on the libraries began in mid-2019, Spisak said.

Also new today: a stable release of the C++ frontend API for PyTorch, which lets developers move models authored with the Python API into C++.

“The big deal here is that with the upgrade to C++, with this release, we’re at full parity now with Python. So basically you can use all the packages that you can use in Python, all the modules, optim, etc. All those are now available in C++; it’s full-parity documentations of parity. And this is something that researchers have been wanting and frankly production users have been wanting, and it gives basically everyone the ability to basically move between Python and C++,” Spisak said.
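
In practice, that parity typically flows through TorchScript: a model authored with the Python API is compiled and serialized, then loaded from C++ without a Python runtime. A minimal sketch:

    import torch

    class TinyNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = torch.nn.Linear(8, 2)

        def forward(self, x):
            return torch.relu(self.fc(x))

    # Compile to TorchScript and serialize. The saved artifact can be loaded
    # from C++ with torch::jit::load("tiny_net.pt") and run without Python.
    scripted = torch.jit.script(TinyNet())
    scripted.save("tiny_net.pt")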

An experimental version of custom C++ classes was also introduced today. C++ implementations of PyTorch have been particularly important for the makers of reinforcement learning models, Spisak said.

PyTorch 1.5 also brings upgrades for the staple torchvision, torchtext, and torchaudio libraries.

Version 1.5 also includes updates for the torch_xla package for using PyTorch with Google Cloud TPUs or TPU Pods. Work on the XLA compiler integration dates back to talks between employees at the two companies that started in late 2017.

The release of PyTorch 1.5 today follows the release of 1.4 in January, which included Java support and mobile customization options. Facebook first introduced Google Cloud TPU support and quantization and PyTorch Mobile at an annual PyTorch developer conference held in San Francisco in October 2019.

PyTorch 1.5 supports only Python 3; support for Python 2 has been dropped.

AI indicates which symptoms might be leading indicators of COVID-19

Image Credit: Sofiia Balitckaia / Shutterstock

In a preprint paper published this week on Arxiv.org, a team of researchers from the Mayo Clinic and Nference, a startup developing tech that analyzes text from biomedical publications, report that they’ve used AI to isolate phenotypes characteristic of the coronavirus. They claim that a specific combination of cough and diarrhea, along with anosmia (a loss of taste or smell) and excessive sweating, constitutes some of the earliest electronic medical record-derived signatures of COVID-19, showing up as many as 4 to 7 days prior to testing.

The coauthors’ approach could be used to spot and triage early cases of coronavirus, perhaps lightening the load on overwhelmed hospitals. While there’s no cure for COVID-19 yet, preliminary studies suggest that early diagnosis can dramatically improve health outcomes.

To conduct their analysis, the team employed a natural language processing system designed to automate the recognition of diseases, drugs, phenotypes, and other entities; quantify the strength of contextual associations between those entities; and classify each association as “positive,” “negative,” or “other.” It incorporates Google’s Transformer architecture, which contains neurons (mathematical functions) arranged in layers that transmit signals from data and adjust the strength (weights) of each connection. All AI models learn to make predictions this way, but Transformers uniquely rely on attention, in which every output element is connected to every input element and the weightings between them are calculated dynamically.
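
At its core is the standard scaled dot-product attention from the original Transformer paper, where Q, K, and V are the query, key, and value matrices derived from the input and d_k is the key dimension:

    Attention(Q, K, V) = softmax(QKᵀ / √d_k) V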

The system ingested 8,229,092 clinical notes from the Mayo Clinic’s electronic medical records for 14,967 patients who had undergone PCR testing, which detects the virus’ genetic material (272 patients in the data set were confirmed to have COVID-19). Symptoms and putative symptoms were extracted from the notes both a few weeks prior to and a few weeks after the date when the PCR test was taken.

The AI-extracted info reveals that diarrhea occurred in 43 COVID-19-positive patients (15.8%) in the week prior to PCR testing, whereas only 822 COVID-19-negative patients (5.6%) had diarrhea. An altered or diminished sense of taste or smell was also amplified in COVID-19 patients, as were, to a lesser degree, excessive sweating (31 patients, or 11.4%), fatigue (37, or 13.6%), headache (35, or 12.9%), and cough. Interestingly, despite evidence to the contrary, fever and chills were found to be somewhat nonspecific to those with COVID-19, at least in this patient population — 24.6% of COVID-19-positive patients had a fever a week prior to the PCR test versus 18.6% of COVID-19-negative patients.

In a further analysis of the data, out of 351 possible pairwise conjunctions of 27 phenotypes for COVID-19-positive compared with COVID-19-negative patients, two phenotypes — (1) cough and diarrhea and (2) sweating and diarrhea — were found to be “particularly significant.” Cough and diarrhea co-occurred in 36 patients with COVID-19 (13.2%) and in 486 patients without COVID-19 (3.3%), indicating a 4-fold amplification, while sweating (diaphoresis) and diarrhea co-occurred in 21 COVID-19 patients (7.7%) versus 204 patients without COVID-19 (1.4%).

“Our findings from the EHR analysis of COVID-19 progression can aid in a human pathophysiology enabled summary of the experimental therapies being investigated for COVID-19,” concluded the coauthors. “A caveat of relying solely on [electronic medical record] inference is that mild phenotypes that may not lead to a presentation for clinical care, such as anosmia, may go unreported in otherwise asymptomatic patients. As at-home serology-based tests for COVID-19 with high sensitivity and specificity are approved, capturing these symptoms will become increasingly important in order to facilitate the continued development and refinement of disease models. EHR-integrated digital health tools may help address this need.”

The work was a part of the Mayo Clinic’s ongoing collaboration with Cambridge-based Nference, a participant in Mayo’s Clinical Data Analytics Platform program. Since January, Nference’s chief focus has been identifying targets and biomarkers for new drugs, matching patients with therapeutic regimens, and devising applications such as label expansion, postmarketing surveillance, and drug repurposing.

Google's DynamicEmbedding framework extends TensorFlow to 'colossal-scale' applications

Google AI logo
Image Credit: Khari Johnson / VentureBeat

In a preliminary whitepaper published this week on Arxiv.org, Google researchers describe DynamicEmbedding, which extends Google’s TensorFlow machine learning framework to “colossal-scale” applications with arbitrary numbers of features (e.g., search queries). According to Google, AI models developed on it have achieved significant accuracy gains over the course of two years, demonstrating that they can grow “incessantly” without having to be constantly retuned by engineers.

Currently, DynamicEmbedding models are suggesting keywords to advertisers in Google Smart Campaigns, annotating images informed by “enormous” search queries (with Inception), and translating sentences into ad descriptions across languages (with Neural Machine Translation). Google says that many of its engineering teams have migrated algorithms to DynamicEmbedding so that they can train and retrain them without much data preprocessing.

DynamicEmbedding could be useful in scenarios where focusing on the most frequently occurring data might cast aside too much valuable information. That’s because the framework grows itself by learning from potentially unlimited novel input, enabling it to self-evolve through model training techniques like transfer learning (where a model trained on one task is repurposed on a related task) and multitask learning (where multiple learning tasks are solved at the same time).

Building DynamicEmbedding into TensorFlow required adding a new set of operations to the Python API that take symbolic strings as input and “intercept” upstream and downstream signals when running a model. These operations interface with a server called DynamicEmbedding Service (DES) to process the content part of a model. This talks to a DynamicEmbedding Master module that divvies up and distributes the work among agents — called DynamicEmbedding Workers. Workers are principally responsible for allocating memory and computation and communicating with external cloud storage, as well as ensuring that all DynamicEmbedding models remain backward compatible.
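
The underlying idea — embedding lookups keyed by raw strings, with capacity allocated on demand rather than fixed to a precomputed vocabulary — can be illustrated with a toy sketch (this is not Google’s internal API):

    import numpy as np

    class GrowingEmbedding:
        """Toy dynamic embedding: rows are created lazily the first time a
        string feature (e.g., a search query) is seen, so the table grows
        with the data instead of being capped at a fixed vocabulary size."""
        def __init__(self, dim, seed=0):
            self.dim = dim
            self.rows = {}
            self.rng = np.random.default_rng(seed)

        def lookup(self, key):
            if key not in self.rows:  # allocate a fresh trainable row
                self.rows[key] = self.rng.normal(scale=0.05, size=self.dim)
            return self.rows[key]

    table = GrowingEmbedding(dim=64)
    vec = table.lookup("colossal-scale search query")  # table grows on demand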

Courtesy of a component called EmbeddingStore, DynamicEmbedding integrates with external storage systems like Spanner and Bigtable. Data can be stored in local cache and remote, mutable databases. This allows fast recovery from worker failure, as DynamicEmbedding doesn’t need to wait until all the previous data is loaded before accepting new requests.

Google says that in experiments DynamicEmbedding was shown to substantially reduce memory usage in training a model architecture known as Seq2Seq, which turns one sequence into another sequence. With 100 TensorFlow workers and a vocabulary size of 297,781, it needed between 123GB and 152GB of RAM, compared with TensorFlow’s 242GB of RAM to achieve the same level of accuracy.

In a separate experiment, the Smart Campaign model on DynamicEmbedding — which has been deployed in production for more than a year — outperformed non-DynamicEmbedding models in metrics such as click-through rate across 20 languages. In fact, the DynamicEmbedding-powered models won 49 out of a total of 72 evaluation metrics Google used for the dozens of different countries it evaluated.

“Our [Smart Campaign] model has been fed with new training data every month, and its size … has been automatically growing from a few gigabytes to hundreds of gigabytes in less than six months,” wrote the paper’s coauthors. They noted that as of February 2020 the Google Smart Campaign model contained over 124 billion parameters (the configuration variables estimated from data and required by the model when making predictions). “We hope that [DynamicEmbedding] can be used in a wide variety of machine learning applications [that] face challenges around ever-growing scale in data inputs.”

Alphabet’s Loon launches its first commercial service as balloons take flight over Kenya

Google Loon

Above: Google’s Loon launched its first commercial internet service in Kenya using its balloons.

Image Credit: Loon

Alphabet’s long-gestating Loon project reached a major milestone today when the company officially launched its first-ever commercial service as its balloons took flight over Kenya. Loon, which uses weather balloons to deliver internet connectivity to remote areas, launched balloons over Kenya as part of a partnership with Telkom Kenya.

The achievement is a noteworthy step for Alphabet, which has often talked in grandiose terms about developing “moonshots” but has struggled to turn projects like autonomous vehicles into commercial products. In this case, the Loon breakthrough is also a significant symbolic victory for the efforts of companies like Alphabet and Facebook to extend internet connectivity to remote populations via a mixture of balloons and drones.

Originally dubbed Project Loon, Loon began as an experiment at Google back in 2011 and gradually evolved before being spun out into a standalone company. In early 2019, Loon created an advisory board of telecom insiders in an effort to accelerate its commercial ambitions.

Using balloons that travel roughly 20km above the earth’s surface, Loon networks a constellation of balloons with algorithms that track their movements and spacing to maintain delivery of internet connections.

The company had originally announced the Kenyan partnership back in 2018, with hopes of launching service commercially last year, but various delays caused it to be pushed back to this month. The Kenyan government formally approved the service last month, which set up a race to get things off the ground, a task made more complicated by the coronavirus pandemic.

The balloons are launched from sites in Puerto Rico and Nevada. For today’s kickoff, that meant the Loon team had to get them in the air and then navigate them to Kenya, which is about 11,000km away.

According to a blog post by Loon CTO Salvatore Candido, the system uses software to automatically create a map that optimizes the flight path based on weather forecasts and can be continually adjusted. Each balloon takes a unique route to its final destination.

In the past year, the company made significant improvements to that navigation system after logging more than 1 million flight hours. Using machine learning, the system discovered that flying in zig-zag patterns was often more efficient than flying straight toward a destination, and that flying in figure-eight patterns rather than circles helps the balloons remain over a given area for a longer period of time.

While a ground crew still monitors the system, Loon’s ability to automate and rapidly learn from environmental conditions has proven critical in making the service commercially viable.

“The Loon team is excited to bring service to people in places that previously had little or no connectivity in Kenya,” Candido said. “As I’ve often said, people have much more important needs than the internet — if you can bring folks food, clean water, or medical supplies, do that first. But as humanity copes with the COVID-19 pandemic and we find ourselves physically distancing from our friends, colleagues, and family, it is our ability to stay in touch online that is keeping us informed, together, and connected.”

ForgeRock raises $93.5 million to automate identity and access management

ForgeRock
Image Credit: ForgeRock

Digital identity management solutions provider ForgeRock today announced that it raised $93.5 million in equity financing. CEO Fran Rosch says the fresh capital will enable the company to invest in R&D, cloud infrastructure, global sales, and promotion of its new and existing solutions, including AI tools.

ForgeRock’s product suite could be used — and is already being used by over 1,100 organizations including AutoZone, Philips, Geico, the BBC, BMW, Pearson, and Deloitte — to beat back the recent raft of identity compromises. According to private consultancy Javelin Strategy & Research, from 2016 to 2017, fraudulent takeovers of consumers’ accounts jumped 120%, and victims spent an average of $290 and 16 hours to fix those problems.

“This fundraising comes on the heels of a transformational year where ForgeRock … cemented [its] position as a leader in this fast-growing category. With enterprises moving off of legacy identity solutions for both workforce and consumer identity use cases, we knew we would need additional outside capital to maintain our strong momentum,” Rosch told VentureBeat via email. “This funding will turbo-charge our mission to fundamentally change the way companies connect with their employees and customers with the only AI-powered platform built for consumers, workforce, and things.”

ForgeRock is the brainchild of ex-Sun Microsystems employees, who cofounded the company after Oracle’s acquisition of Sun in January 2010. They forked and built upon the code from Sun’s identity and access management software, which was scheduled to be phased out in favor of Oracle’s technology.

ForgeRock’s premier product — the Identity Platform — comprises identity management, access management, user-managed access, directory services, edge security, and identity gateway, along with autoscaling that allows developers to deploy up to billions of user identities. The identity relationships of entire workforces can be managed across channels (including on-premises, in the cloud, and on mobile) without requiring that people give up control of profiles, passwords, or privacy settings.

With ForgeRock, users can perform self-service tasks like changing their password and controlling what data is shared for privacy reasons, subject to company policies checked during workflow and password reset/change processes. Intelligent Authentication — a visual drag-and-drop tool that lets developers configure, measure, and adjust single sign-on login journeys — can take into account signals such as device, contextual, behavioral, user choice, analytics, and risk-based factors at login and optionally at authorization and resource access time.

Customers can use Identity Management to aggregate data from sources and create identity relationship models at a granular level (e.g., parents, children, and friends), which can be extended to devices users own or carry. These can define simple relationships such as a corporate laptop, personal phone, and leased car that can then be leveraged to make business orchestration and security decisions.


Regardless, Identity Management provides data visualization to identify the relationships for any user, device, or thing to detect anomalies in access or provisioning. It also delivers an auditing service that gives security teams the ability to trace the lifecycle of users and their activity, which can be stored in a database for reporting purposes or sent to standard security information and event management (SIEM) solutions for analysis.

ForgeRock also measures risk with respect to entitlements, at least to the extent that it identifies high-risk entitlements and tags them with real-time synchronization so that users can view all entitlements currently under management. This functionality complements ForgeRock’s Identity Governance product, which sends notifications to designated reviewers to verify user access and kicks off workflow or provisioning processes when the review is complete.

IT teams can protect against threats with the Identity Platform’s dynamic orchestration and intelligence engine, which captures rich context to make access decisions. Beyond that, ForgeRock offers directory services that enable companies to prep for growth with a database designed to handle large transaction volumes, as well as a gateway that ensures protocols are enforced consistently across apps, APIs, and microservices.

One of the newest additions to the ForgeRock family — Autonomous Identity — taps AI and machine learning to automate activities like approving access requests, performing certifications, and predicting what access should be provisioned to users. It joins Identity Cloud, a preconfigured and managed identity-as-a-service offering built atop the existing Identity Platform.

Most of ForgeRock’s solutions can be consumed as services or deployed as software, and they support a range of environments including internet of things, cloud, mobile, and enterprise. Identity Platform customers gain access to the ForgeRock Trust Network, an ecosystem of more than 75 partners that provides access to capabilities using the Identity Platform as the foundation. Specialized authenticators, fraud and risk management, behavioral biometrics, and identity proofing integrations from those partners are included free of charge.

ForgeRock isn’t alone in a global identity and access market that’s anticipated to be worth $22.68 billion by 2025. New York-based Socure nabbed $30 million in February 2019 for its cloud-based identity verification and fraud prevention solution. Global identity verification provider Onfido raised $50 million early last year, and troubled identity management firm Jumio recently found more stable footing and launched a new authentication product. More recently, identity and credentials verification firm Auth0 and Evident raked in $103 million and $20 million, respectively.

But San Francisco-based ForgeRock claims its customers see as much as 25% savings on implementation costs and an average 50% return on investment. Moreover, the company says that last year revenue eclipsed $100 million, annual recurring revenue grew 75%, and year-over-year business growth hit 30%.

Riverwood Capital led ForgeRock’s latest round of funding (a series E), with participation from existing investors Accel, Meritech Capital, Foundation Capital, and KKR Growth. It brings the company’s total raised to $230 million following an $88 million series D in March 2017.

ForgeRock says it has more than 600 team members across its offices in Bristol and London, U.K.; Paris and Grenoble, France; Vancouver, Canada; Oslo, Norway; Munich, Germany; Sydney, Australia; and Singapore, up from 50 employees in 2010.

Blue Prism raises over $120 million to bolster its robotic process automation suite

Blue Prism
Image Credit: Blue Prism

In a sign of the robotic process automation market’s continued strength in the face of an economic downturn, Blue Prism today announced that it has raised £100 million ($124 million) in equity financing at a valuation of around £1 billion ($1.24 billion). Chair and CEO Jason Kingdon says the fresh capital will be used to strengthen Blue Prism’s balance sheet while allowing investment in the company’s automation suite.

Robotic process automation — an industry that’s anticipated to be worth $10.7 billion by 2027, according to Grand View Research — is a form of workflow automation technology that taps AI to tackle digital tasks previously performed by humans. It’s recently come to the fore in light of the coronavirus pandemic — last month, Blue Prism launched a COVID-19 response task team, which worked with the U.K.’s National Health Service; the University of California, San Francisco; and the Leeds Building Society to automate HR, personnel, finance, vaccine development, and other health care support functions.

“In this environment, our [RPA solution is] arguably more important than ever in driving organizational adaptation and resilience, and our role as a strategic technology partner to our customers in many ways becomes more vital,” Kingdon told VentureBeat via email. “The duration and impact of this pandemic are at this stage unknown, and as a result we are taking action to invest and reinforce our product differentiation in preparation for the opportunities [that] will occur both in the short- and longer-term.”

Blue Prism was founded in 2001 by a group of automation experts to develop tech that could be used to improve organizational efficiency. In 2003, their first commercial product — Automate — launched in general availability, and in 2016 Blue Prism became a publicly traded company with a listing on the London Stock Exchange.

Blue Prism’s eponymous platform, which is built on Microsoft .NET, automates virtually any app on any platform — including mainframes, Windows machines, and the web — in environments ranging from terminal emulators to web browsers and services. It’s designed for multi-environment deployment models with both physical and logical access controls, plus a centralized release management interface and a process change distribution framework that provide visibility into what’s deployed where.

The Blue Prism platform records system logins, changes in management, decisions, and actions taken by its software robots to identify statistics and operational analytics. It supports regulatory, security, and governance contexts such as PCI-DSS, HIPAA, and SOX, and its process coding is automated on the backend to allow users to program processes using a drag-and-drop interface.

Blue Prism customers gain access to a reusable library of processes and objects for developing automations. It’s also scalable — the company says its top 50 clients use 500 software robots, on average. Blue Prism’s Process Discovery module takes snapshots of work queues at defined periods to gather activity and metrics and collates everything in a shareable dashboard. And its Decipher module, which recently exited beta, ingests, extracts, and transforms data from documents like vendor contracts, claims forms, emails, spreadsheets, purchase orders, and field reports.

Enterprise customers gain access to Blue Prism Cloud, a fully integrated software-as-a-service offering with pre-integrated skills. From the Blue Prism Cloud Hub, they can view live feed overviews and environment-specific analytics for utilization, process completion times, and performance against service-level agreements. Enterprise customers also get Wireframer, an automation builder that prepopulates automations and reduces design time by a claimed 70%, as well as Blue Prism Cloud IADA, which leverages AI to adjust on-premises and cloud resource utilization based on network infrastructure and app performance.

Last year, Blue Prism launched a new AI engine, an updated marketplace for extensions (Blue Prism Digital Exchange), and a lab for AI research and development (Blue Prism Labs). Connectors to AI tools from Amazon, Google, and IBM joined marketplace tools that gave partners and customers the ability to create, share, and deploy plugins for the Blue Prism platform.

Blue Prism competes against heavyweights like UiPath, which in April nabbed $568 million at a $7 billion valuation for its suite of AI-imbued process automation tools, and Automation Anywhere, which raised $290 million at a $6.8 billion valuation. Elsewhere, Kryon secured $40 million, Softmotive pulled together a $25 million tranche from a host of investors, and Automation Hero secured $14.5 million.

But Blue Prism claims it has a leg up with respect to success and renewal rate. The company reports that 96% of customers opt to re-up service and that 90% of its certified partners report that they’re satisfied with the platform.

The funding — which brings Blue Prism’s total raised to nearly $200 million — comes after it achieved the fastest revenue growth of all large U.K. public software companies for the fourth consecutive year in 2019. That included an 83% increase in revenue to £101 million ($125.74 million) in the first half of 2019; revenue stands at £137 million ($170 million) today. Over the same time frame, Blue Prism’s customer base grew to 1,677 enterprise accounts as it added 700 new clients. (As of today, Blue Prism has 1,819 customers, including Microsoft, Accenture, Google, IBM, Heineken, and Jaguar Land Rover.)

In addition to its headquarters in Warrington, Blue Prism has offices in London, Austin, Sydney, Paris, Munich, and Washington. It employs just over 1,000 people.

Allergy Insights with Watson uses AI to predict allergy symptom risk

IBM keyboard logo
Image Credit: IBM

IBM today announced a new tool that taps AI to predict when allergy symptoms are likely to flare up. It’s called Allergy Insights with Watson, and it’s available in The Weather Channel app for iOS and Android ahead of a launch on the web.

In addition to a 15-day forecast that predicts allergy symptom risk (e.g., high, moderate, low) and a 3-day outlook for allergens, Allergy Insights delivers notifications when allergy risk is changing and explanations about how weather conditions can trigger symptoms. It also provides pollen levels by allergen (with mold coming soon), tips for managing allergies or reducing exposure, and news articles and editorial content related to allergies.

According to a recent survey conducted by IBM, most allergy sufferers — 60% — use weather forecasts to help manage and mitigate the worst of their symptoms. But pollen metrics like tree, grass, and ragweed levels, which the bulk of apps use to assess risk, aren’t necessarily good predictors, and their sources tend to be spotty.

That’s why IBM scientists trained the model underpinning Allergy Insights on data from IBM MarketScan, a family of anonymized health corpora representing over 100 million patients; location information; and weather attributes like temperature, humidity, rain, wind, and dew point. The geographical data enabled the model to understand what flora is growing nearby and when allergens will be produced, while removing references to the time of year helped reflect changes to the start of allergy season attributable to climate change.
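The training setup can be illustrated with a small sketch: fit a classifier on weather attributes alone, deliberately omitting calendar features such as month or day-of-year, so the model keys on conditions rather than the nominal season. The data, feature set, and model choice below are synthetic illustrations, not IBM’s actual pipeline:

```python
# Sketch: predict allergy-symptom risk from weather features only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(18, 8, n),    # temperature (degrees C)
    rng.uniform(20, 95, n),  # relative humidity (%)
    rng.exponential(2, n),   # rainfall (mm)
    rng.uniform(0, 40, n),   # wind speed (km/h)
    rng.normal(10, 6, n),    # dew point (degrees C)
])
# Synthetic label: warm, dry, breezy days carry higher symptom risk.
y = ((X[:, 0] > 15) & (X[:, 2] < 1.5) & (X[:, 3] > 10)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```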

The result? IBM claims Allergy Insights — whose predictions don’t reflect air quality levels — is 20% to 50% more accurate than algorithms that rely on pollen counts alone. Moreover, it can predict allergy risk down to the ZIP code.

“After extensive research, pollen data and air quality levels were excluded from the predictive model, since they proved unreliable indicators of allergy risk,” said IBM. “While no two allergy sufferers are the same, knowing in advance when symptom risk might change can help anyone plan ahead and take action before symptoms may flare up … The team will continue to review pollen data and include it when it’s more reliable.”

Interestingly, it’s not the first instance of AI being applied to the problem of allergy and pollen prediction. In 2018, Doc.ai, which offers an app that connects health companies and medical researchers with smartphone users, built a model to anticipate allergy risk drawing on user data like BMI and physical activity. Separately, researchers at the University of Texas at Austin designed a device that measures pollen levels from specific locations throughout the day.

Adverity raises $30 million to collect, prep, and analyze marketing data

Adverity
Image Credit: Adverity

Adverity, a data analytics startup targeting applications in media, marketing, and ecommerce, today announced that it raised $30 million in equity financing, bringing its total raised to $50 million.

By accelerating R&D and growth across Adverity’s offices at home and abroad, the fresh capital could help the company’s customers — among them Ikea, Red Bull, Unilever, MediaCom, and IPG Mediabrands — address the challenges AI and machine learning present with respect to productization. According to Algorithmia, 50% of companies spend between 8 and 90 days deploying a single AI model, with 18% taking longer than 90 days.

Adverity offers a cloud-agnostic data integration module that collects and transforms advertising, analytics, retail, social, and website data from various sources (including affiliate networks, web tracking tools, offline files, and TV audience metering systems), preparing it for further processing and analysis. Once standardized and stored, the data can be pushed to virtually any destination, including Adverity’s own Insights module or various on-premises solutions, cloud data warehouses, data lakes, and business intelligence tools.
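The collect, normalize, and load pattern described above can be sketched in a few lines. The connector, field names, and schema below are hypothetical stand-ins, not Adverity’s API:

```python
from typing import Callable, Iterable

Record = dict  # normalized row: {"date", "channel", "spend", "clicks"}

def ad_platform_connector() -> Iterable[dict]:
    """Stand-in for an API call to one advertising source."""
    yield {"day": "2020-05-01", "spend_usd": 120.0, "link_clicks": 340}

def normalize(raw: dict) -> Record:
    """Map source-specific field names onto the common schema."""
    return {"date": raw["day"], "channel": "ad-platform",
            "spend": raw["spend_usd"], "clicks": raw["link_clicks"]}

def run_pipeline(connector: Callable[[], Iterable[dict]],
                 transform: Callable[[dict], Record],
                 sink: Callable[[Record], None]) -> None:
    """Collect raw records, standardize them, and load them into a sink."""
    for raw in connector():
        sink(transform(raw))

# The sink could be a warehouse or BI-tool writer; here it just prints.
run_pipeline(ad_platform_connector, normalize, print)
```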

The Insights module lets users create shareable dashboards for reporting KPIs, engagement metrics, marketing return on investment, business results, campaign performance, and growth patterns. Optionally, Insights can generate visualizations to elucidate trends over time, while PreSense — Adverity’s augmented analytics product — taps machine learning to analyze data, identify trends and anomalies, and deliver suggestions for improvements.
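One common way to implement the kind of anomaly flagging described above is to mark points that deviate sharply from a trailing window of recent values. This is a generic sketch of that idea, not PreSense’s actual algorithm:

```python
import statistics

def flag_anomalies(series, window=7, threshold=3.0):
    """Yield (index, value) for points far outside the trailing window."""
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # guard against zero spread
        if abs(series[i] - mean) / stdev > threshold:
            yield i, series[i]

# Daily click counts with one suspicious spike.
daily_clicks = [100, 104, 98, 101, 99, 103, 100, 102, 480, 101]
print(list(flag_anomalies(daily_clicks)))  # flags the 480-click day: [(8, 480)]
```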

Adverity competes with a number of startups developing platforms that promise to unify and model marketing data. Panoramic emerged from stealth in September 2019 with $35 million in funding, shortly after predictive sales analytics company 6Sense announced a $27 million round. Pyze recently raised $4.6 million to further develop its suite of AI-driven analytics and marketing tools. There’s also Funnel, Superwise.ai, Dremio, and ActionIQ, to name a few others.

CEO Alexander Igelsböck says that over the past 12 months, Adverity — which is headquartered in Austria, with offices in New York and London — notched growth in annual recurring revenue of more than 100%. “Our platform plays a crucial role in helping enterprises become agile, empowering digital teams with intelligent insights,” he said. “It is imperative we invest in evolving and developing new solutions, improving access and quality, and tackle the challenges of data complexity.”

Sapphire Ventures led this latest funding round in Adverity — a series C — which saw participation from existing backers Mangrove Capital Partners, Felix Capital, SAP.iO, and Aws Gründerfonds. Igelsböck says the funding will be used to expand the company’s technology and commercial teams.

Igelsböck, Martin Brunthaler, and Andreas Glänzer cofounded Adverity in 2015 after launching a price comparison technology company that was acquired by Heise Media. Igelsböck and Brunthaler met at VeriSign; Glänzer had held a sales role at Google and served as regional sales head at iProspect.